
more GGUF Models

PostPosted: Sun Jan 28, 2024 9:24 pm
by Jimmy
hi,

At https://gpt4all.io/index.html I found different *.GGUF models.
I am not sure if we can use them with FiveWin.

Re: more GGUF Models

PostPosted: Mon Jan 29, 2024 6:14 am
by Antonio Linares
Dear Jimmy,

You may be able to use any GGUF file (an open source Artificial Intelligence model) with llama64.dll from Harbour and FWH.
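To give you an idea, here is a minimal sketch of how the call could look from FWH/Harbour. Please note it is only a sketch: the exported function name "llama_ask", its parameters and the GGUF file name are just placeholders, so adjust them to whatever your own llama64.dll build really exports.

#include "FiveWin.ch"
#include "dll.ch"

function Main()

   local cModel  := "phi-2.Q4_K_M.gguf"   // path to any GGUF model on disk
   local cPrompt := "What is FiveWin?"
   local cAnswer

   // ask the model and show its reply
   cAnswer := LlamaAsk( cModel, cPrompt )

   MsgInfo( cAnswer )

return nil

// Placeholder wrapper exported by llama64.dll: assumed to load the GGUF
// model, run the prompt and return the generated text as a string.
// Change the FROM name to match the real export of your build.
DLL FUNCTION LlamaAsk( cModel AS LPSTR, cPrompt AS LPSTR ) AS LPSTR ;
   PASCAL FROM "llama_ask" LIB "llama64.dll"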

Please follow these steps to build llama64.dll so that it takes full advantage of your computer's capabilities:
viewtopic.php?p=266316#p266316
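Just as a rough orientation, a generic llama.cpp shared-library build with CUDA support looks more or less like this; the flag names may differ with newer llama.cpp versions, and the linked post remains the reference for the exact llama64.dll procedure:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=ON -DLLAMA_CUBLAS=ON
cmake --build build --config Release

You would then copy the resulting DLL next to your EXE as llama64.dll.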

Here I use a Xeon CPU (iMac) and an NVIDIA GPU (RTX 3060, eGPU). If I build it locally, it will not execute properly on a lesser CPU/GPU.
That's why you have to build it yourself on your own CPU and GPU. AI gets its best speed from an NVIDIA GPU. Enough RAM is also required for very large GGUF files.

We do recommend "Microsoft/phi-2" and also "TinyLlama/TinyLlama-1.1B-Chat-v1.0" for low hardware requirements. You can find most GGUFs at https://huggingface.co/
Recently an improved "brittlewis12/phi-2-orange-GGUF" has become available, which is better than "Microsoft/phi-2".

We have recently published a repo at https://github.com/FiveTechSoft/tinyMedical with the code to fine-tune TinyLlama with your own data; in this case we used a medical dataset. We encourage you to start building your own GGUFs. Full source code is available in the repo. TinyLlama is great for getting started with AI training :-)