
PR: Refine ggml-hexagon backend (Qualcomm Hexagon NPU backend) for latest ggml, whisper.cpp, llama.cpp #12326

Open · wants to merge 146 commits into base: master

Commits (146)
ef343cc
ggml-qnn: add Qualcomm QNN backend for GGML
zhouwg Feb 14, 2025
8015ad7
ggml-qnn: santiy check
zhouwg Feb 15, 2025
4137ed1
ggml-qnn: update script build-run-android.sh to compare peformance of…
zhouwg Feb 16, 2025
436c599
ggml-qnn: fix minor issue in test-backend-ops.cpp
zhouwg Feb 17, 2025
7258496
ggml-qnn: merge QNN RPC feature from https://github.com/zhouwg/kantv/…
zhouwg Feb 18, 2025
b41d84e
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 18, 2025
d91f1ac
ggml-qnn: a concise approach to offload mulmat to QNN backend(sync fr…
zhouwg Feb 19, 2025
835a9b4
ggml-qnn: remove redundant codes
zhouwg Feb 20, 2025
d563e40
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 20, 2025
53ca7c0
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 20, 2025
d3efd1a
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 21, 2025
bcd5ee8
ggml-qnn: add Qualcomm QNN backend for GGML
zhouwg Feb 14, 2025
5ccb9f2
ggml-qnn: merge QNN RPC feature from https://github.com/zhouwg/kantv/…
zhouwg Feb 18, 2025
513141f
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 18, 2025
c8455ea
ggml-qnn: a concise approach to offload mulmat to QNN backend(sync fr…
zhouwg Feb 19, 2025
1e94524
ggml-qnn: remove redundant codes
zhouwg Feb 20, 2025
10014c4
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 20, 2025
6d01dc1
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 20, 2025
c750cc5
ggml-qnn: sync from branch kantvai-ggmlqnn-npurpc
zhouwg Feb 21, 2025
36d9a23
ggml-qnn: fix a minior typo in internal doc
zhouwg Feb 23, 2025
d9a5e0f
ggml-qnn: refine function ggml_qnn_create_general_tensor() to avoid c…
zhouwg Feb 23, 2025
6281630
ggml-qnn: fix a minor typo in source code
zhouwg Feb 24, 2025
f1cb636
build: avoid ggml-qnn backend breaking other backend's builds
zhouwg Feb 24, 2025
183099d
ggml-qnn: remove redundant codes to make PR reviewers happy
zhouwg Feb 25, 2025
8812e72
ggml-qnn: refine code format
zhouwg Feb 25, 2025
48449ae
ggml-qnn: offload quantized type mulmat to QNN backend
zhouwg Feb 26, 2025
c208133
ggml-qnn: refine source code structure to make code more clearly
zhouwg Feb 27, 2025
24c31ff
ggml-qnn: enable release build with necessary logs to make reviewers …
zhouwg Feb 27, 2025
e874a5b
ggml-qnn: enable all quantize type with 2d mulmat
zhouwg Feb 27, 2025
ed37e16
ggml-qnn: enable log output of GGMLQNN_LOG_INFO in command line mode …
zhouwg Feb 28, 2025
d290dc5
ggml-qnn: Windows port --- step2
zhouwg Feb 28, 2025
3668810
ggml-qnn: merge UT code and corresponding script from local dev branc…
zhouwg Mar 2, 2025
12f0438
ggml-qnn: merge ggml_qnn_mul_mat_4d from local dev branch to make wor…
zhouwg Mar 2, 2025
e9cc7ba
ggml-qnn: submit AI-assisted ggml_qnn_mul_mat_4d(not worked currently…
zhouwg Mar 2, 2025
0dbd545
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 --- step2
zhouwg Mar 2, 2025
5745fad
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 --- step3
zhouwg Mar 2, 2025
e700d2a
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 --- step4
zhouwg Mar 2, 2025
e5fdcb6
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 --- step5
zhouwg Mar 2, 2025
f53a27c
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 --- step6
zhouwg Mar 2, 2025
c8a8775
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 --- step7
zhouwg Mar 2, 2025
1c1e8d9
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 --- step8
zhouwg Mar 2, 2025
9796e3d
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 --- good in step9
zhouwg Mar 2, 2025
ab6a2ec
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 --- narrow down t…
zhouwg Mar 2, 2025
df2551d
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 --- step10
zhouwg Mar 2, 2025
e603942
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 --- narrow down t…
zhouwg Mar 2, 2025
02bc00f
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 --- step11
zhouwg Mar 2, 2025
13b2f5c
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 --- both ok in st…
zhouwg Mar 2, 2025
3d92078
ggml-qnn: AI-assisted ggml_qnn_mul_mat_4d by Grok 3 ---finalizing ver…
zhouwg Mar 2, 2025
e2bdef3
ggml-qnn: refine ggml_qnn_mul_mat and ggml_qnn_general_node according…
zhouwg Mar 2, 2025
7df6c41
ggml-qnn: remove no-needed comments
zhouwg Mar 2, 2025
6fad271
ggml-qnn: Windows port --- step3
zhouwg Mar 3, 2025
dc6f5e3
ggml-qnn: remove un-needed function
zhouwg Mar 4, 2025
a884d43
ggml-qnn:rebase to upstream
zhouwg Mar 4, 2025
4502022
ggml-qnn: fix a minior issue during rebase to upstream
zhouwg Mar 4, 2025
d3ced9b
ggml-qnn: update script according to https://github.com/ggml-org/llam…
zhouwg Mar 4, 2025
db58469
ggml-qnn: fix a minior issue in ggmlqnn_create_general_tensor()
zhouwg Mar 4, 2025
d6c6d07
ggml-qnn: active member variable _device_id in class qnn_instance
zhouwg Mar 4, 2025
c73cf15
ggml-qnn: refine ggml_qnn_general_node and ggml_qnn_mul_mat to make c…
zhouwg Mar 4, 2025
9ff652a
ggml-qnn: Windows port --- step4
zhouwg Mar 6, 2025
05b68df
ggml-qnn: Windows port -- step5
zhouwg Mar 7, 2025
5dc4b4e
ggml-qnn: WoA(Windows on ARM) -- step6
zhouwg Mar 8, 2025
b13576a
ggml-qnn: rebase to upstream
zhouwg Mar 9, 2025
f655720
ggml-qnn: pr to upstream
zhouwg Mar 11, 2025
8a9b88e
ggml-qnn: rebase to upstream
zhouwg Mar 18, 2025
cf88a43
ggml-qnn: self code-review
zhouwg Mar 18, 2025
0b93da8
ggml-qnn: rebase upstream
zhouwg Mar 19, 2025
c6c6563
ggml-qnn: add approach through Hexagon cDSP
zhouwg Mar 22, 2025
7b8c9d2
ggml-qnn: refine general approach through Hexagon cDSP
zhouwg Mar 23, 2025
0b5d7a5
ggml-qnn: refine the entire ggml-qnn.cpp to make code more clear
zhouwg Mar 24, 2025
9e3ef48
ggml-qnn: refine the entire ggml-qnn.cpp to make code more clear
zhouwg Mar 24, 2025
f78beb5
ggml-qnn: add build script for libggmlop_skel.so
zhouwg Mar 24, 2025
474288e
ggml-qnn: remove redundant functions in this PR and make codes more c…
zhouwg Mar 25, 2025
e45c627
ggml-qnn: original ggml_compute_forward_add and ggml_compute_forward_…
zhouwg Mar 25, 2025
d911099
ggml-qnn: modify build-run-android.sh to verify mulmat and validate m…
zhouwg Mar 25, 2025
2690a5c
ggml-qnn: make host code(ggml-qnn.cpp) more clear and more stable
zhouwg Mar 26, 2025
320ef55
ggml-qnn: refine code according to self code-review and make code mor…
zhouwg Mar 26, 2025
eb19589
ggml-qnn: offload more ggml op to Hexagon cDSP
zhouwg Mar 27, 2025
23ef20f
ggml-hexagon: code on AP(arm-cpu) side is stable now
zhouwg Mar 28, 2025
b8976d4
ggml-hexagon: optimize GGML_OP_ADD on cDSP side
zhouwg Mar 28, 2025
1835aac
ggml-hexagon: simplify hexagon-kernel build logic in CMakeLists.txt
zhouwg Mar 29, 2025
767734e
ggml-hexagon: release ggml-hexagon v0.98
zhouwg Mar 29, 2025
c5d897f
ggml-hexagon: release ggml-hexagon v0.99
zhouwg Mar 29, 2025
ea595d0
ggml-hexagon: try to offload q6_k mulmat to cDSP
zhouwg Mar 29, 2025
3897dc3
ggml-hexagon: fix minior issue in ggml-hexagon.cpp after self code-re…
zhouwg Mar 29, 2025
5201594
ggml-hexagon: check validation of ggml-hexagon.cfg before create appr…
zhouwg Mar 30, 2025
686a1c8
ggml-hexagon: fix all compiler warnings in ggml-hexagon.cpp
zhouwg Mar 30, 2025
e58cd8d
ggml-hexagon: enable only one backend device for HWACCEL_CDSP and ena…
zhouwg Mar 31, 2025
da08bfa
ggml-hexagon: rpc ion memory pool and test-backend-ops works fine in …
zhouwg Mar 31, 2025
15c1f79
ggml-hexagon: make comprision of mulmat performance between HWACCEL_Q…
zhouwg Mar 31, 2025
4f80ac9
ggml-hexagon: release ggml-hexagon v1.00
zhouwg Mar 31, 2025
b191a7b
ggml-hexagon: rebase to upstream
zhouwg Apr 1, 2025
d242bc1
ggml-hexagon: check configuration of enable_rpc_dma_mempool in functi…
zhouwg Apr 1, 2025
36754a6
ggml-hexagon: uniform rpc_ion_memsize and rpc_ion_usage between HWACC…
zhouwg Apr 1, 2025
ce047b6
ggml-hexagon: make buffer mechanism more clear in HWACCEL_CDSP approach
zhouwg Apr 1, 2025
e92ffdb
ggml-hexagon: add perf function in hexagon kernerls on cDSP side
zhouwg Apr 2, 2025
ee733dd
ggml-hexagon: fix a stupid issue of why set rpc latency failure and i…
zhouwg Apr 2, 2025
fd10234
ggml-hexagon: make helper function ggmlhexagon_get_timestring() threa…
zhouwg Apr 2, 2025
0ebec99
ggml-hexagon: fix a typo in ggml-hexagon.cpp
zhouwg Apr 2, 2025
baecc2d
ggml-hexagon: list all known todo and fixme tasks in ggml-hexagon.cpp
zhouwg Apr 2, 2025
8b58002
ggml-hexagon: fix units MB -> MiB
zhouwg Apr 2, 2025
ba4aaa9
ggml-hexagon: try to make ggml-hexagon backend works fine in a standa…
zhouwg Apr 3, 2025
fc1d9db
ggml-hexagon: remove reduament code and make debug log more clear
zhouwg Apr 3, 2025
c75df4e
ggml-hexagon: add gemma-3-4b-it-Q8_0.gguf to verify q8_0 mulmat on cDSP
zhouwg Apr 3, 2025
7fbae90
ggml-hexagon:add skeleton code of offload GGML_OP_SOFT_MAX/GGML_OP_RM…
zhouwg Apr 3, 2025
48a5ef5
ggml-hexagon: release ggml-dsp v0.60 on cDSP side
zhouwg Apr 4, 2025
07a4826
ggml-hexagon: merge build logic in kernels/Makefile to ggml-hexagon/C…
zhouwg Apr 5, 2025
3d2acf2
ggml-hexagon: fix a typo in ggml-hexagon.cpp
zhouwg Apr 5, 2025
473ea76
ggml-hexagon: uniform NDEBUG usage in ggml-hexagon.cpp and ggml-dsp.c
zhouwg Apr 6, 2025
9ebc58e
ggml-hexagon: add profiler feature for purpose of visualize NPU perfo…
zhouwg Apr 7, 2025
c9ecd60
ggml-hexagon: remove so-called dma memory pool to avoid confusion and…
zhouwg Apr 8, 2025
83b0e4f
ggml-hexagon: make function ggmlhexagon_init_rpcmempool in ggml-hexag…
zhouwg Apr 8, 2025
3a34101
ggml-hexagon: fix potential resource leak in class hexagon_profiler
zhouwg Apr 8, 2025
98fdc28
ggml-hexagon: enable multi-threading feature on cDSP side
zhouwg Apr 8, 2025
880976f
ggml-hexagon: upgrade QNN SDK to v2.33.0.250327
zhouwg Apr 9, 2025
67551bb
ggml-hexagon: fix typo in ggml-hexagon.cpp
zhouwg Apr 9, 2025
9d43167
ggml-dsp: probe QuRT RTOS information in function ggmlop_dsp_open
zhouwg Apr 9, 2025
0b28da9
ggml-hexagon: setting enable_rpc_ion_mempool to 1 and make test-backe…
zhouwg Apr 10, 2025
ea970ca
ggml-hexagon: check whether user's specified htp arch is valid in CMa…
zhouwg Apr 10, 2025
f12593a
ggml-hexagon: sync with upstream
zhouwg Apr 11, 2025
828d465
ggml-hexagon: refine pinned-memory feature
zhouwg Apr 11, 2025
9839bd0
ggml-hexagon: refine build system in ggml-hexagon
zhouwg Apr 11, 2025
65c377a
ggml-hexagon: remove redundant code in struct ggml_backend_hexagon_bu…
zhouwg Apr 11, 2025
7ad26b6
ggml-hexagon: upgrade Android NDK to android-ndk-r28
zhouwg Apr 11, 2025
db15b6c
ggml-dsp: split ggml-dsp.c into multiple files and cleanup
zhouwg Apr 11, 2025
a37f1b5
ggml-dsp: refine ggml-dsp and make ggml-dsp more clear
zhouwg Apr 12, 2025
90b2dc0
ggml-hexagon: fix a minior issue in dev ops
zhouwg Apr 12, 2025
e9bfbce
ggml-hexagon: fix a build issue in CI
zhouwg Apr 12, 2025
4359824
ggml-dsp: cleanup code
zhouwg Apr 15, 2025
7bb2774
ggml-hexagon: sync with upstream
zhouwg Apr 15, 2025
0451d53
ggml-dsp: cleanup code
zhouwg Apr 16, 2025
da2545d
ggml-dsp:refine ggmlhexagon_dsp_add_f32
zhouwg Apr 16, 2025
80330d3
ggml-dsp: refine logic of thread_counts
zhouwg Apr 17, 2025
7f11fc1
ggml-hexagon: release v1.06 and ready for code review
zhouwg Apr 17, 2025
2285eb3
ggml-dsp: make GGML_OP_ADD more faster on cDSP side
zhouwg Apr 19, 2025
70206d7
ggml-hexagon: sync from project kantv(make ggml-hexagon backend can w…
zhouwg Apr 24, 2025
b79f396
sync with upstream llama.cpp and sync ggml-hexagon.cpp from project k…
zhouwg Apr 29, 2025
35bfc28
sync with upstream
zhouwg May 7, 2025
3ab7ddb
sync with upstream
zhouwg May 10, 2025
5bbcd23
ggml-hexagon: upgrade QNN SDK to v2.34.0.250424
zhouwg May 11, 2025
770061f
sync with upstream
zhouwg May 16, 2025
5a588d1
ggml-hexagon: sync from project kantv(fix a long-term issue which int…
zhouwg May 17, 2025
057bf1b
ggml-hexagon: sync with upstream llama.cpp
zhouwg May 23, 2025
700f039
ggml-hexagon: add set_hexagon_cfg(int new_hexagon_backend, int new_hw…
zhouwg Jun 3, 2025
0ef1e49
ggml-hexagon: sync with branch self-build
zhouwg Jun 19, 2025
1245c4e
ggml-hexagon:sycn with branch self-build
zhouwg Jun 23, 2025
2864ed9
project: sync with upstream(PR-14501:remove kompute backend)
zhouwg Jul 3, 2025

Files changed

2 changes: 2 additions & 0 deletions .gitignore
@@ -146,3 +146,5 @@ poetry.toml
# Local scripts
/run-vim.sh
/run-chat.sh

/prebuilts
15 changes: 15 additions & 0 deletions CMakeLists.txt
@@ -7,6 +7,20 @@ set(CMAKE_WARN_UNUSED_CLI YES)

set(CMAKE_EXPORT_COMPILE_COMMANDS ON)

if(CMAKE_SYSTEM_NAME STREQUAL "Android")
    if(DEFINED HTP_ARCH_VERSION)
        if (${HTP_ARCH_VERSION} STREQUAL "v75" OR ${HTP_ARCH_VERSION} STREQUAL "v79")
            #works fine on Snapdragon 8Gen3&8Elite with 1.5x - 3x performance gains with the default ggml backend
            set(OPT_FLAG " -O3 -march=armv8.7-a -mcpu=cortex-x1 -mtune=cortex-x1 -ffp-model=fast -fno-finite-math-only")
            message("OPT_FLAG:${OPT_FLAG}")
            set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DGGML_USE_HEXAGON ${DEBUG_FLAG} ${OPT_FLAG}")
            set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DGGML_USE_HEXAGON ${DEBUG_FLAG} ${OPT_FLAG}")
            set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -DGGML_USE_HEXAGON ${DEBUG_FLAG} ${OPT_FLAG}")
            set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -DGGML_USE_HEXAGON ${DEBUG_FLAG} ${OPT_FLAG}")
        endif()
    endif()
endif()

if (NOT XCODE AND NOT MSVC AND NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE Release CACHE STRING "Build type" FORCE)
set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS "Debug" "Release" "MinSizeRel" "RelWithDebInfo")
@@ -127,6 +141,7 @@ llama_option_depr(WARNING LLAMA_RPC GGML_RPC)
llama_option_depr(WARNING LLAMA_SYCL GGML_SYCL)
llama_option_depr(WARNING LLAMA_SYCL_F16 GGML_SYCL_F16)
llama_option_depr(WARNING LLAMA_CANN GGML_CANN)
llama_option_depr(WARNING LLAMA_HEXAGON GGML_HEXAGON)

if (NOT MSVC)
if (LLAMA_SANITIZE_THREAD)
2 changes: 2 additions & 0 deletions ggml/CMakeLists.txt
@@ -206,6 +206,7 @@ option(GGML_OPENCL_EMBED_KERNELS "ggml: embed kernels"
option(GGML_OPENCL_USE_ADRENO_KERNELS "ggml: use optimized kernels for Adreno" ON)
set (GGML_OPENCL_TARGET_VERSION "300" CACHE STRING
"gmml: OpenCL API version to target")
option(GGML_HEXAGON "ggml: use HEXAGON" OFF)

# toolchain for vulkan-shaders-gen
set (GGML_VULKAN_SHADERS_GEN_TOOLCHAIN "" CACHE FILEPATH "ggml: toolchain file for vulkan-shaders-gen")
@@ -270,6 +271,7 @@ set(GGML_PUBLIC_HEADERS
include/ggml-rpc.h
include/ggml-sycl.h
include/ggml-vulkan.h
include/ggml-hexagon.h
include/gguf.h)

set_target_properties(ggml PROPERTIES PUBLIC_HEADER "${GGML_PUBLIC_HEADERS}")
48 changes: 48 additions & 0 deletions ggml/include/ggml-hexagon.h
@@ -0,0 +1,48 @@
#pragma once

#include "ggml.h"
#include "ggml-backend.h"

#ifdef __cplusplus
extern "C" {
#endif

#define GGML_HEXAGON_MAX_DEVICES 4
#define GGML_HEXAGON_BACKEND_NAME "hexagon"

enum HEXAGONBackend {
    HEXAGON_BACKEND_QNNCPU = 0,
    HEXAGON_BACKEND_QNNGPU = 1,
    HEXAGON_BACKEND_QNNNPU = 2,
    HEXAGON_BACKEND_CDSP = 3,
    HEXAGON_BACKEND_GGML = 4, //"fake" HEXAGON backend for compare performance between HEXAGON backend and ggml backend
};

//0: general approach through QNN:offload ggmlop to QNN(QNNCPU, QNNGPU, QNNNPU)
//1: special approach through QNN-SINGLEGRAPH:mapping entire ggml cgraph to a single QNN graph
//2: general approach through Hexagon cDSP:offload ggmlop to Hexagon cDSP directly
enum hwaccel_approach_type {
    HWACCEL_QNN = 0,
    HWACCEL_QNN_SINGLEGRAPH= 1,
    HWACCEL_CDSP = 2,
};

GGML_BACKEND_API ggml_backend_t ggml_backend_hexagon_init(size_t dev_num, const char * qnn_lib_path);

GGML_BACKEND_API bool ggml_backend_is_hexagon(ggml_backend_t backend);

GGML_BACKEND_API int ggml_backend_hexagon_get_device_count(void);

GGML_BACKEND_API ggml_backend_reg_t ggml_backend_hexagon_reg(void);

GGML_BACKEND_API const char * ggml_backend_hexagon_get_devname(size_t dev_num);

GGML_BACKEND_API void ggml_backend_hexagon_set_cfg(int new_hexagon_backend, int new_hwaccel_approach);

GGML_BACKEND_API int ggml_backend_hexagon_get_mulmat_algotype(void);

GGML_BACKEND_API void ggml_backend_hexagon_set_mulmat_algotype(int new_mulmat_algotype);

#ifdef __cplusplus
}
#endif
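
The header above is the entire public surface of the new backend. A minimal usage sketch follows (not part of the PR diff): it assumes that dev_num follows the HEXAGONBackend numbering and that the QNN/Hexagon runtime libraries sit under /data/local/tmp/, the default search path configured by this PR's build scripts; adjust both for a real deployment.

// hexagon_smoke.cpp -- illustrative sketch only; the file name and chosen values are hypothetical
#include <cstdio>
#include "ggml-backend.h"
#include "ggml-hexagon.h"

int main() {
    // select the direct cDSP offload approach before creating the backend
    ggml_backend_hexagon_set_cfg(HEXAGON_BACKEND_CDSP, HWACCEL_CDSP);

    // assumption: dev_num maps to the HEXAGONBackend values declared above
    ggml_backend_t backend = ggml_backend_hexagon_init(HEXAGON_BACKEND_CDSP, "/data/local/tmp/");
    if (backend == nullptr) {
        fprintf(stderr, "ggml-hexagon init failed\n");
        return 1;
    }

    printf("hexagon devices: %d, selected: %s\n",
           ggml_backend_hexagon_get_device_count(),
           ggml_backend_hexagon_get_devname(HEXAGON_BACKEND_CDSP));

    ggml_backend_free(backend);   // declared in ggml-backend.h
    return 0;
}

The returned handle behaves like any other ggml_backend_t, so it can be passed to the usual ggml/llama.cpp scheduling paths without further hexagon-specific calls.
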
1 change: 1 addition & 0 deletions ggml/src/CMakeLists.txt
@@ -371,6 +371,7 @@ ggml_add_backend(RPC)
ggml_add_backend(SYCL)
ggml_add_backend(Vulkan)
ggml_add_backend(OpenCL)
ggml_add_backend(HEXAGON)

foreach (target ggml-base ggml)
target_include_directories(${target} PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/../include> $<INSTALL_INTERFACE:include>)
9 changes: 9 additions & 0 deletions ggml/src/ggml-backend-reg.cpp
@@ -61,6 +61,10 @@
#include "ggml-cann.h"
#endif

#ifdef GGML_USE_HEXAGON
#include "ggml-hexagon.h"
#endif

// disable C++17 deprecation warning for std::codecvt_utf8
#if defined(__clang__)
# pragma clang diagnostic push
@@ -185,6 +189,9 @@ struct ggml_backend_registry {
#ifdef GGML_USE_RPC
register_backend(ggml_backend_rpc_reg());
#endif
#ifdef GGML_USE_HEXAGON
register_backend(ggml_backend_hexagon_reg());
#endif
#ifdef GGML_USE_CPU
register_backend(ggml_backend_cpu_reg());
#endif
@@ -568,12 +575,14 @@ void ggml_backend_load_all_from_path(const char * dir_path) {
ggml_backend_load_best("cann", silent, dir_path);
ggml_backend_load_best("cuda", silent, dir_path);
ggml_backend_load_best("hip", silent, dir_path);
ggml_backend_load_best("kompute", silent, dir_path);
ggml_backend_load_best("metal", silent, dir_path);
ggml_backend_load_best("rpc", silent, dir_path);
ggml_backend_load_best("sycl", silent, dir_path);
ggml_backend_load_best("vulkan", silent, dir_path);
ggml_backend_load_best("opencl", silent, dir_path);
ggml_backend_load_best("musa", silent, dir_path);
ggml_backend_load_best("hexagon", silent, dir_path);
ggml_backend_load_best("cpu", silent, dir_path);
// check the environment variable GGML_BACKEND_PATH to load an out-of-tree backend
const char * backend_path = std::getenv("GGML_BACKEND_PATH");
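
Once registered as shown in this file, the backend is discoverable through the generic registry/device API, so an application needs no hexagon-specific calls to find it. A short sketch (not part of the diff) using only existing ggml-backend.h entry points:

// list_devices.cpp -- enumerate whatever backends the registry picked up,
// including "hexagon" when ggml is built with GGML_USE_HEXAGON or the
// module is found on the dynamic search path handled in the hunk above
#include <cstdio>
#include "ggml-backend.h"

int main() {
    ggml_backend_load_all();   // tries ggml_backend_load_best() for each known backend name
    for (size_t i = 0; i < ggml_backend_dev_count(); ++i) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        printf("device %zu: %s -- %s\n", i,
               ggml_backend_dev_name(dev),
               ggml_backend_dev_description(dev));
    }
    return 0;
}
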
133 changes: 133 additions & 0 deletions ggml/src/ggml-hexagon/CMakeLists.txt
@@ -0,0 +1,133 @@
project(ggml-hexagon)
message(STATUS "Using HEXAGON backend")
message("CMAKE_SYSTEM_NAME : ${CMAKE_SYSTEM_NAME}")

set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

if(NOT DEFINED QNN_SDK_PATH)
message(FATAL_ERROR "QNN_SDK_PATH not defined")
endif()

if(NOT DEFINED HEXAGON_SDK_PATH)
message(FATAL_ERROR "HEXAGON_SDK_PATH not defined")
endif()

message("QNN_SDK_PATH : ${QNN_SDK_PATH}")
message("HEXAGON_SDK_PATH: ${HEXAGON_SDK_PATH}")
message("HTP_ARCH_VERSION: ${HTP_ARCH_VERSION}")

if (CMAKE_BUILD_TYPE STREQUAL "Debug")
set(DEBUG_FLAG "-DDEBUG -Wall")
message("Debug mode:${DEBUG_FLAG}")
else()
set(DEBUG_FLAG "-DNDEBUG -Wall")
#manually disable all verbose logs in ggml-hexagon/CMakeLists.txt to
#make compare NPU performance through llama-bench more clear
#set(DEBUG_FLAG "-DNDEBUG -Wall -DDISABLE_ALL_LOG")
message("Release mode:${DEBUG_FLAG}")
endif()

#v68 --- Snapdragon 888
#v69 --- Snapdragon 8 Gen1
#v73 --- Snapdragon 8 Gen2
#v75 --- Snapdragon 8 Gen3
#v79 --- Snapdragon 8 Elite
if(NOT DEFINED HTP_ARCH_VERSION)
message(FATAL_ERROR "HTP_ARCH_VERSION not defined, valid htp arch: v68,v69,v73,v75,v79")
endif()

#check whether user's specified htp arch is valid
set(CHECK_HTP_ARCH "WRONG")
foreach (feat v68 v69 v73 v75 v79)
if (${feat} STREQUAL ${HTP_ARCH_VERSION})
set(CHECK_HTP_ARCH "GOOD")
endif()
endforeach()
if (${CHECK_HTP_ARCH} STREQUAL "WRONG")
message(FATAL_ERROR "ggml-hexagon backend only support htp arch v68,v69,v73,v75,v79")
endif()

#check optimization flags
set(OPT_FLAG " ")
if (${HTP_ARCH_VERSION} STREQUAL "v75" OR ${HTP_ARCH_VERSION} STREQUAL "v79")
#works fine on Snapdragon 8Gen3&8Elite with 1.5x - 3x performance gains with the default ggml backend
set(OPT_FLAG " -O3 -march=armv8.7-a -mcpu=cortex-x1 -mtune=cortex-x1 -flto -D_GNU_SOURCE -fvectorize -ffp-model=fast -fno-finite-math-only")
endif()
message("OPT_FLAG:${OPT_FLAG}")

if(CMAKE_SYSTEM_NAME STREQUAL "Android")
find_library(LOG_LIB log)

add_library(cdsprpc
SHARED
IMPORTED)
set_target_properties(cdsprpc
PROPERTIES
IMPORTED_LOCATION
${HEXAGON_SDK_PATH}/ipc/fastrpc/remote/ship/android_aarch64/libcdsprpc.so)

set(QNN_LINK_LIBRARIES ${LOG_LIB} cdsprpc)
set(QNN_DEFAULT_LIB_SEARCH_PATH "/data/local/tmp/" CACHE STRING "customized library search path for QNN backend")

include_directories(${HEXAGON_SDK_PATH}/incs)
include_directories(${HEXAGON_SDK_PATH}/incs/stddef)
include_directories(${HEXAGON_SDK_PATH}/ipc/fastrpc/incs)
include_directories(${HEXAGON_SDK_PATH}/ipc/fastrpc/rpcmem/inc)
include_directories(${HEXAGON_SDK_PATH}/ipc/fastrpc/remote/ship/android_Debug_aarch64)
include_directories(${HEXAGON_SDK_PATH}/utils/examples)
include_directories(${HEXAGON_SDK_PATH}/ipc/fastrpc/rtld/ship/android_aarch64)
include_directories(${HEXAGON_SDK_PATH}/libs/atomic/inc)
include_directories(${HEXAGON_SDK_PATH}/libs/atomic/android_Debug_aarch64/ship)
include_directories(${CMAKE_SOURCE_DIR}/ggml/src/ggml-hexagon/)
include_directories(${CMAKE_SOURCE_DIR}/ggml/src/ggml-hexagon/kernels/)
elseif(CMAKE_SYSTEM_NAME STREQUAL "Windows")
set(QNN_DEFAULT_LIB_SEARCH_PATH "C:\\" CACHE STRING "customized library search path for QNN backend")
else()
message(FATAL_ERROR "ggml-hexagon now only available on Android and Windows(Windows on ARM)")
endif()

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DGGML_USE_HEXAGON ${DEBUG_FLAG} ${OPT_FLAG}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DGGML_USE_HEXAGON ${DEBUG_FLAG} ${OPT_FLAG}")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -DGGML_USE_HEXAGON ${DEBUG_FLAG} ${OPT_FLAG}")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -DGGML_USE_HEXAGON ${DEBUG_FLAG} ${OPT_FLAG}")

file(GLOB HEXAGON_SOURCES "${CMAKE_CURRENT_LIST_DIR}/*.cpp" "${CMAKE_CURRENT_LIST_DIR}/kernels/stub.c")
ggml_add_backend_library(ggml-hexagon ${HEXAGON_SOURCES})

target_include_directories(ggml-hexagon PRIVATE ${QNN_SDK_PATH}/include/QNN ${HEXAGON_SDK_PATH} ${CMAKE_CURRENT_LIST_DIR})
target_link_libraries(ggml-hexagon PRIVATE ${QNN_LINK_LIBRARIES})

string(REGEX REPLACE "/$" "" QNN_DEFAULT_LIB_SEARCH_PATH "${QNN_DEFAULT_LIB_SEARCH_PATH}")
target_compile_definitions(ggml-hexagon PRIVATE QNN_DEFAULT_LIB_SEARCH_PATH="${QNN_DEFAULT_LIB_SEARCH_PATH}/")

#cross compiling source codes of hexagon kernels which running on cDSP side
function(ggml_hexagon_build_kernel KNAME)
message(STATUS "ggml_hexagon: build hexagon-kernel ${KNAME}")

add_custom_command(
TARGET ${PROJECT_NAME}
POST_BUILD
COMMAND echo "current working path:`pwd`\n"
COMMAND echo "${CMAKE_CURRENT_LIST_DIR}/kernels"
COMMAND make -C ${CMAKE_CURRENT_LIST_DIR}/kernels/ clean
COMMAND make -C ${CMAKE_CURRENT_LIST_DIR}/kernels/ HEXAGON_SDK_PATH=${HEXAGON_SDK_PATH} HTP_ARCH_VERSION=${HTP_ARCH_VERSION} DEBUG_FLAG=${DEBUG_FLAG}
COMMAND echo "current working path:`pwd`\n"
COMMAND ls -l ../../../bin/libggmldsp-skel.so
COMMENT "build hexagon-kernel"
)
endfunction()

function(ggml_hexagon_setup_cfg KNAME)
message(STATUS "ggml_hexagon: setup runtime configuration file ${KNAME}")
add_custom_command(
TARGET ${PROJECT_NAME}
POST_BUILD
COMMAND echo "current working path:`pwd`\n"
COMMAND /bin/cp -fv ../../../../../scripts/${KNAME} ../../../bin/
COMMENT "setup runtime configuration file"
)
endfunction()

ggml_hexagon_build_kernel("cdsp")
ggml_hexagon_setup_cfg("ggml-hexagon.cfg")