Application Data Vectorization
When to Use
Application data vectorization leverages embedding models to convert multi-modal data such as unstructured text and images into semantic vectors. In scenarios like intelligent retrieval and Retrieval-Augmented Generation (RAG), embedding models act as a bridge, mapping discrete textual and visual data into a unified vector space for cross-modal data retrieval. Vectorization applies to the following scenarios:
- Efficient retrieval: rapidly recalls the document fragments most relevant to a query from a vector database by computing vector similarity. Compared with traditional inverted indexing, this approach can capture implicit semantic associations, improving the contextual relevance of the retrieved content (a minimal similarity sketch follows this list).
- RAG: a leading approach to mitigating the hallucination problem of large language models (LLMs). A vector knowledge base plays a crucial role in RAG: precise context retrieved from the knowledge base (the text corresponding to the Top-K most relevant vectors) is supplied to the generation model as part of the prompt, which significantly reduces hallucinations.
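The sketch below illustrates the recall step shared by efficient retrieval and RAG context building: it scores stored chunk vectors against a query vector with cosine similarity and returns the Top-K matches. It is not part of the AIP APIs; the Chunk type and the topK and cosineSimilarity helpers are hypothetical, and real vectors would come from the embedding model described later in this topic.

// Hypothetical sketch of similarity-based recall; not an AIP API.
interface Chunk {
  text: string;
  vector: Array<number>;
}

// Cosine similarity between two vectors of equal length.
function cosineSimilarity(a: Array<number>, b: Array<number>): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k chunks most similar to the query vector.
// A real vector database would use an index instead of a full scan,
// and would precompute each score once instead of inside the comparator.
function topK(queryVector: Array<number>, chunks: Array<Chunk>, k: number): Array<Chunk> {
  const sorted: Array<Chunk> = Array.from(chunks).sort((a: Chunk, b: Chunk) =>
    cosineSimilarity(queryVector, b.vector) - cosineSimilarity(queryVector, a.vector));
  return sorted.slice(0, k);
}

In a RAG pipeline, the text of the returned chunks would then be concatenated into the prompt of the generation model.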
Basic Concepts
Multi-Modal Embedding Model
Embedding models are used to implement application data vectorization. The system supports multi-modal embedding models, which map data of different modalities, such as text and images, into a unified vector space. These models support both single-modal retrieval (text-to-text and image-to-image) and cross-modal retrieval (text-to-image and image-to-text).
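Because text and image embeddings share one vector space, the same comparison works across modalities. The hypothetical sketch below performs text-to-image retrieval: queryVector is assumed to come from the text embedding model and imageVectors from the image embedding model (see How to Develop), and the dot and bestImageFor helpers are illustrative only. It also assumes L2-normalized embeddings so that the dot product behaves like cosine similarity; otherwise use cosine similarity directly.

// Hypothetical text-to-image retrieval sketch; not an AIP API.
function dot(a: Array<number>, b: Array<number>): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    sum += a[i] * b[i];
  }
  return sum;
}

// Return the URI of the image whose embedding best matches the text query.
function bestImageFor(queryVector: Array<number>, imageVectors: Map<string, Array<number>>): string {
  let bestUri = '';
  let bestScore = Number.NEGATIVE_INFINITY;
  imageVectors.forEach((vector: Array<number>, uri: string) => {
    const score = dot(queryVector, vector);
    if (score > bestScore) {
      bestScore = score;
      bestUri = uri;
    }
  });
  return bestUri;
}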
Text Segmentation
To address length limitations when textual data is vectorized, you can use the APIs provided by the ArkData Intelligence Platform (AIP) to split the input text into smaller sections. This approach ensures efficient and effective data vectorization.
Working Principles
Application data vectorization converts raw application data into vectors and stores them in a vector database (store).
Constraints
- The model can process up to 512 characters of text per inference, supporting both Chinese and English.
- The model can handle images smaller than 20 MB in a single inference (a sketch of pre-checks for both limits follows this list).
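As a hypothetical pre-check (not an AIP API), the helpers below enforce these limits before calling the model. Text that exceeds the character limit should instead be split with splitText(), as shown under How to Develop; the helper names and constants are illustrative only.

// Hypothetical pre-checks for the constraints above; not part of the AIP APIs.
const MAX_TEXT_LENGTH = 512;               // characters per text inference
const MAX_IMAGE_BYTES = 20 * 1024 * 1024;  // images must stay below 20 MB

// True if the text fits into a single inference; otherwise split it first.
function isTextWithinLimit(text: string): boolean {
  return text.length <= MAX_TEXT_LENGTH;
}

// True if an image of the given size in bytes fits into a single inference.
function isImageWithinLimit(imageSizeInBytes: number): boolean {
  return imageSizeInBytes < MAX_IMAGE_BYTES;
}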
Available APIs
The following table lists the APIs related to application data vectorization. For more APIs and their usage, see ArkData Intelligence Platform.
| API | Description |
| -------- | -------- |
| getTextEmbeddingModel(config: ModelConfig): Promise<TextEmbedding> | Obtains a text embedding model. |
| loadModel(): Promise<void> | Loads this text embedding model. |
| splitText(text: string, config: SplitConfig): Promise<Array<string>> | Splits text. |
| getEmbedding(text: string): Promise<Array<number>> | Obtains the embedding vector of the given text. |
| getEmbedding(batchTexts: Array<string>): Promise<Array<Array<number>>> | Obtains the embedding vectors of a given batch of texts. |
| releaseModel(): Promise<void> | Releases this text embedding model. |
| getImageEmbeddingModel(config: ModelConfig): Promise<ImageEmbedding> | Obtains an image embedding model. |
| loadModel(): Promise<void> | Loads this image embedding model. |
| getEmbedding(image: Image): Promise<Array<number>> | Obtains the embedding vector of the given image. |
| releaseModel(): Promise<void> | Releases this image embedding model. |
How to Develop
- Import the intelligence module.
import { intelligence } from '@kit.ArkData';
- Obtain a text embedding model.
import { BusinessError } from '@kit.BasicServicesKit';

let textConfig: intelligence.ModelConfig = {
  version: intelligence.ModelVersion.BASIC_MODEL,
  isNpuAvailable: false,
  cachePath: "/data"
};
let textEmbedding: intelligence.TextEmbedding;
intelligence.getTextEmbeddingModel(textConfig)
  .then((data: intelligence.TextEmbedding) => {
    console.info("Succeeded in getting TextModel");
    textEmbedding = data;
  })
  .catch((err: BusinessError) => {
    console.error("Failed to get TextModel and code is " + err.code);
  });
- Load this embedding model.
textEmbedding.loadModel()
  .then(() => {
    console.info("Succeeded in loading Model");
  })
  .catch((err: BusinessError) => {
    console.error("Failed to load Model and code is " + err.code);
  });
- Split text. If the data length exceeds the limit, call splitText() to split the data into smaller text blocks and then vectorize them.
let splitConfig: intelligence.SplitConfig = {
  size: 10,
  overlapRatio: 0.1
};
let splitText = 'text';
intelligence.splitText(splitText, splitConfig)
  .then((data: Array<string>) => {
    console.info("Succeeded in splitting Text");
  })
  .catch((err: BusinessError) => {
    console.error("Failed to split Text and code is " + err.code);
  });
- Obtain the embedding vector of the given text. The given text can be a single piece of text or a collection of multiple text entries.
let text = 'text';
textEmbedding.getEmbedding(text)
  .then((data: Array<number>) => {
    console.info("Succeeded in getting Embedding");
  })
  .catch((err: BusinessError) => {
    console.error("Failed to get Embedding and code is " + err.code);
  });

let batchTexts = ['text1', 'text2'];
textEmbedding.getEmbedding(batchTexts)
  .then((data: Array<Array<number>>) => {
    console.info("Succeeded in getting Embedding");
  })
  .catch((err: BusinessError) => {
    console.error("Failed to get Embedding and code is " + err.code);
  });
- Release this text embedding model.
textEmbedding.releaseModel()
  .then(() => {
    console.info("Succeeded in releasing Model");
  })
  .catch((err: BusinessError) => {
    console.error("Failed to release Model and code is " + err.code);
  });
- Obtain an image embedding model.
let imageConfig: intelligence.ModelConfig = {
  version: intelligence.ModelVersion.BASIC_MODEL,
  isNpuAvailable: false,
  cachePath: "/data"
};
let imageEmbedding: intelligence.ImageEmbedding;
intelligence.getImageEmbeddingModel(imageConfig)
  .then((data: intelligence.ImageEmbedding) => {
    console.info("Succeeded in getting ImageModel");
    imageEmbedding = data;
  })
  .catch((err: BusinessError) => {
    console.error("Failed to get ImageModel and code is " + err.code);
  });
- Load this image embedding model.
imageEmbedding.loadModel()
  .then(() => {
    console.info("Succeeded in loading Model");
  })
  .catch((err: BusinessError) => {
    console.error("Failed to load Model and code is " + err.code);
  });
- Obtain the embedding vector of the given image.
let image = "file://<packageName>/data/storage/el2/base/haps/entry/files/xxx.jpg";
imageEmbedding.getEmbedding(image)
  .then((data: Array<number>) => {
    console.info("Succeeded in getting Embedding");
  })
  .catch((err: BusinessError) => {
    console.error("Failed to get Embedding and code is " + err.code);
  });
- Release this image embedding model.
imageEmbedding.releaseModel()
  .then(() => {
    console.info("Succeeded in releasing Model");
  })
  .catch((err: BusinessError) => {
    console.error("Failed to release Model and code is " + err.code);
  });
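The steps above can also be combined with async/await. The following is a minimal sketch (not from the official sample) that splits a piece of text, vectorizes the chunks in one batch call, and releases the model; the vectorizeText function name is hypothetical, the returned Map merely stands in for a real vector store, and error handling is reduced to a single try/catch for brevity.

import { intelligence } from '@kit.ArkData';
import { BusinessError } from '@kit.BasicServicesKit';

// Minimal end-to-end sketch: split -> vectorize -> release.
// The returned Map is a placeholder for a real vector store.
async function vectorizeText(rawText: string): Promise<Map<string, Array<number>>> {
  const store = new Map<string, Array<number>>();
  const config: intelligence.ModelConfig = {
    version: intelligence.ModelVersion.BASIC_MODEL,
    isNpuAvailable: false,
    cachePath: "/data"
  };
  const model: intelligence.TextEmbedding = await intelligence.getTextEmbeddingModel(config);
  try {
    await model.loadModel();
    // Keep each chunk within the 512-character limit of a single inference.
    const splitConfig: intelligence.SplitConfig = { size: 10, overlapRatio: 0.1 };
    const chunks: Array<string> = await intelligence.splitText(rawText, splitConfig);
    // Vectorize all chunks in one batch call.
    const vectors: Array<Array<number>> = await model.getEmbedding(chunks);
    chunks.forEach((chunk: string, i: number) => {
      store.set(chunk, vectors[i]);
    });
  } catch (e) {
    const err = e as BusinessError;
    console.error("Vectorization failed and code is " + err.code);
  } finally {
    await model.releaseModel();
  }
  return store;
}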