sunnypilot/third_party/snpe/include/DlSystem/ITensorFactory.hpp
Jason Wen acd46aa94b modeld: retain SNPE and thneed drive model support (#555)
* modeld: Retain pre-20hz drive model support

* Method not available anymore on OP

* some fixes

* Revert "Long planner get accel: new function args (#34288)"

* Revert "Fix low-speed allow_throttle behavior in long planner (#33894)"

* Revert "long planner: allow throttle reflects usage (#33792)"

* Revert "Gate acceleration on model gas press predictions (#33643)"

* Reapply "Gate acceleration on model gas press predictions (#33643)"

This reverts commit 76b08e37cb.

* Reapply "long planner: allow throttle reflects usage (#33792)"

This reverts commit c75244ca4e.

* Reapply "Fix low-speed allow_throttle behavior in long planner (#33894)"

This reverts commit b2b7d21b7b.

* Reapply "Long planner get accel: new function args (#34288)"

This reverts commit 74dca2fccf.

* don't need

* retain snpe

* wrong

* they're symlinks

* remove

* put back into VCS

* add back

* don't include built

* Refactor model runner retrieval with caching support

Added caching for the active model runner type via `ModelRunnerTypeCache` to improve performance and avoid redundant checks. Introduced a `force_check` flag to bypass the cache when necessary, and updated the related code to clear the cache during onroad transitions.

* Update model runner determination logic with caching fix

Enhances `get_active_model_runner` to use the cache more effectively by ensuring type consistency and updating the cache only when necessary. Also updates `is_snpe_model` to pass the `started` state to the runner determination function, improving behavior for dynamic checks. (A sketch of this caching pattern follows the commit details below.)

* default to none

* enable in next PR

* more

---------

Co-authored-by: DevTekVE <devtekve@gmail.com>
2025-01-10 18:34:06 -05:00
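
For context, the caching change described above reduces to a small memoization pattern: remember the detected runner type, let callers bypass the cache with a force_check flag, and drop the cached value on onroad transitions. The sketch below is illustrative only and written in C++ for consistency with the header that follows; the actual sunnypilot helpers (`ModelRunnerTypeCache`, `get_active_model_runner`, `is_snpe_model`) live elsewhere in the tree, and every definition here is hypothetical.

#include <optional>

// Hypothetical runner kinds; the commit only distinguishes SNPE, thneed, and "none".
enum class ModelRunnerType { None, Snpe, Thneed };

class ModelRunnerTypeCache {
 public:
  // Return the cached runner type, re-probing only when the caller forces a
  // check or when nothing has been cached yet.
  ModelRunnerType get_active_model_runner(bool started, bool force_check = false) {
    if (force_check || !cached_) {
      cached_ = probe(started);  // placeholder for the real detection logic
    }
    return *cached_;
  }

  // Called on onroad transitions so the next query re-checks from scratch.
  void clear() { cached_.reset(); }

 private:
  static ModelRunnerType probe(bool started) {
    // Placeholder probe; the real code inspects which model artifacts are installed.
    return started ? ModelRunnerType::Snpe : ModelRunnerType::None;
  }

  std::optional<ModelRunnerType> cached_;
};

// Mirrors the described is_snpe_model change: the started state is forwarded
// to the runner determination so dynamic checks see the current onroad state.
inline bool is_snpe_model(ModelRunnerTypeCache &cache, bool started) {
  return cache.get_active_model_runner(started) == ModelRunnerType::Snpe;
}

The force_check flag and the clear-on-transition hook serve the same purpose from two directions: either the caller demands a fresh probe, or the lifecycle code invalidates the cache so the next caller gets one.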


//=============================================================================
//
// Copyright (c) 2015-2016 Qualcomm Technologies, Inc.
// All Rights Reserved.
// Confidential and Proprietary - Qualcomm Technologies, Inc.
//
//=============================================================================
#ifndef _ITENSOR_FACTORY_HPP
#define _ITENSOR_FACTORY_HPP
#include "ITensor.hpp"
#include "TensorShape.hpp"
#include "ZdlExportDefine.hpp"
#include <istream>
namespace zdl {
namespace DlSystem
{
   class ITensor;
   class TensorShape;
}
}
namespace zdl { namespace DlSystem
{
/** @addtogroup c_plus_plus_apis C++
@{ */
/**
* Factory interface class to create ITensor objects.
*/
class ZDL_EXPORT ITensorFactory
{
public:
   virtual ~ITensorFactory() = default;

   /**
    * Creates a new ITensor with uninitialized data.
    *
    * The strides for the tensor will match the tensor dimensions
    * (i.e., the tensor data is contiguous in memory).
    *
    * @param[in] shape The dimensions for the tensor, in which the last
    *                  element of the vector represents the fastest-varying
    *                  dimension and the zeroth element represents the
    *                  slowest-varying dimension.
    *
    * @return A pointer to the created tensor, or nullptr if creation failed.
    */
   virtual std::unique_ptr<ITensor>
      createTensor(const TensorShape &shape) noexcept = 0;

   /**
    * Creates a new ITensor by loading it from an input stream (for example,
    * a file).
    *
    * @param[in] input The input stream from which to read the tensor data.
    *
    * @return A pointer to the created tensor, or nullptr if creation failed.
    */
   virtual std::unique_ptr<ITensor> createTensor(std::istream &input) noexcept = 0;

   /**
    * Creates a new ITensor with specific data, laid out contiguously in
    * memory. This overload is primarily used where the tensor size cannot be
    * computed directly from the dimensions, for example an NV21-formatted
    * image or any other YUV-formatted image.
    *
    * @param[in] shape The dimensions for the tensor, in which the last
    *                  element of the vector represents the fastest-varying
    *                  dimension and the zeroth element represents the
    *                  slowest-varying dimension.
    *
    * @param[in] data The actual data with which the tensor is filled.
    *
    * @param[in] dataSize The size of the data buffer, in bytes.
    *
    * @return A pointer to the created tensor.
    */
   virtual std::unique_ptr<ITensor>
      createTensor(const TensorShape &shape, const unsigned char *data, size_t dataSize) noexcept = 0;
};
}}
/** @} */ /* end_addtogroup c_plus_plus_apis C++ */
#endif
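
For reference, a minimal usage sketch of the three createTensor() overloads declared above. It assumes the factory is obtained through zdl::SNPE::SNPEFactory::getTensorFactory() from the sibling SNPE headers, that ITensor exposes STL-style iterators as declared in ITensor.hpp, and an illustrative 1x128x256x3 float input shape; those specifics are assumptions, not requirements of this header.

#include <algorithm>
#include <cstddef>
#include <fstream>
#include <memory>
#include <string>
#include <vector>

#include "DlSystem/ITensor.hpp"
#include "DlSystem/ITensorFactory.hpp"
#include "DlSystem/TensorShape.hpp"
#include "SNPE/SNPEFactory.hpp"  // assumed location of SNPEFactory::getTensorFactory()

// Overload 1: allocate an uninitialized, contiguous tensor from a shape, then
// copy the caller's data into it element by element.
std::unique_ptr<zdl::DlSystem::ITensor> makeInputTensor(const std::vector<float> &frame) {
  zdl::DlSystem::ITensorFactory &factory = zdl::SNPE::SNPEFactory::getTensorFactory();
  zdl::DlSystem::TensorShape shape({1, 128, 256, 3});  // illustrative dimensions
  std::unique_ptr<zdl::DlSystem::ITensor> tensor = factory.createTensor(shape);
  if (tensor) {
    std::copy(frame.begin(), frame.end(), tensor->begin());
  }
  return tensor;  // nullptr if creation failed
}

// Overload 2: re-create a tensor from a stream holding previously written tensor data.
std::unique_ptr<zdl::DlSystem::ITensor> loadTensor(const std::string &path) {
  std::ifstream in(path, std::ios::binary);
  return zdl::SNPE::SNPEFactory::getTensorFactory().createTensor(in);
}

// Overload 3: wrap raw bytes (e.g. an NV21/YUV image) whose size cannot be
// derived from the shape alone.
std::unique_ptr<zdl::DlSystem::ITensor> wrapRawImage(const zdl::DlSystem::TensorShape &shape,
                                                     const unsigned char *bytes, size_t nbytes) {
  return zdl::SNPE::SNPEFactory::getTensorFactory().createTensor(shape, bytes, nbytes);
}

All three overloads hand ownership back through std::unique_ptr, so the returned tensors clean themselves up when they go out of scope.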