more GGUF Model

Postby Jimmy » Sun Jan 28, 2024 9:24 pm

hi,

under https://gpt4all.io/index.html I found different *.GGUF models.
I am not sure if we can use them with FiveWin.
greetings,
Jimmy

Re: more GGUF Model

Postby Antonio Linares » Mon Jan 29, 2024 6:14 am

Dear Jimmy,

You may be able to use any GGUF file (an open source Artificial Intelligence model) with Harbour and FWH through llama64.dll.
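
Just as a rough sketch (this is not the actual FWH wrapper: the functions LLAMA_LOADMODEL() and LLAMA_ASK(), their exported names and their parameter types are placeholders that you would adapt to the exports of your own llama64.dll build), calling it from a FWH program could look like this:

#include "FiveWin.ch"

function Main()

   local cModel := "phi-2.Q4_K_M.gguf"          // example name: any GGUF file on disk

   if ! File( cModel )
      MsgAlert( "GGUF model not found: " + cModel )
      return nil
   endif

   LLAMA_LOADMODEL( cModel )                    // load the GGUF into llama64.dll
   MsgInfo( LLAMA_ASK( "What is FiveWin?" ) )   // run a prompt and show the answer

return nil

// Hypothetical declarations: adjust the exported names, parameter types
// and calling convention to match your own llama64.dll build
DLL FUNCTION LLAMA_LOADMODEL( cModel AS LPSTR ) AS LONG LIB "llama64.dll"
DLL FUNCTION LLAMA_ASK( cPrompt AS LPSTR ) AS LPSTR LIB "llama64.dll"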

Please follow these steps to build llama64.dll so that it takes full advantage of your computer's capabilities:
viewtopic.php?p=266316#p266316
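
For reference only, and assuming llama64.dll is produced from the llama.cpp sources, a CUDA-enabled shared-library build has roughly this shape (the exact CMake flag names change between llama.cpp versions, so the steps in the post linked above take precedence):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=ON -DLLAMA_CUBLAS=ON
cmake --build build --config Release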

Here I use a Xeon CPU (iMac) and an nvidia GPU (RTX 3060, eGPU). If I build it locally, it will not execute properly on a lower CPU/GPU; that is why you have to build it yourself on your own CPU and GPU. AI gets its best speed from an nvidia GPU, and enough RAM is also required for very large GGUF files.

We recommend "Microsoft/phi-2" and also "TinyLlama/TinyLlama-1.1B-Chat-v1.0" for low hardware requirements. You can find most GGUF files at https://huggingface.co/
Recently an improved "brittlewis12/phi-2-orange-GGUF" has become available, which is better than "Microsoft/phi-2".
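
For example, one of the phi-2 quantizations can be downloaded from Hugging Face with huggingface-cli (the repository and file names below are just examples, pick whatever quantization fits your RAM):

huggingface-cli download TheBloke/phi-2-GGUF phi-2.Q4_K_M.gguf --local-dir .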

We have recently published a repo at https://github.com/FiveTechSoft/tinyMedical with the code to fine-tune TinyLlama with your own data; in this case we have used a medical dataset. We encourage you to start building your own GGUFs. Full source code is available in the repo. TinyLlama is great for getting started with AI training :-)
regards, saludos

Antonio Linares
www.fivetechsoft.com

