Dear Jimmy,
You may use any GGUF file (an open source AI model) through llama64.dll with Harbour and FWH.
Please follow these steps to build llama64.dll, so that it takes full advantage of your computer's capabilities:
viewtopic.php?p=266316&sid=79f17f8ebda5b77d799c73fb2a39838c#p266316
Here I use a Xeon CPU (iMac) and an nvidia GPU (RTX 3060, eGPU). A DLL built on my hardware may not execute properly on a lower CPU/GPU.
That's why you have to build it yourself on your own CPU and GPU. AI inference gets its best speed from an nvidia GPU. Also, enough RAM is required for very large GGUF files.
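Before handing a downloaded file to llama64.dll, it can help to sanity-check that it really is a GGUF model. A minimal Python sketch (the 24-byte header layout here follows my reading of the GGUF spec: magic "GGUF", then version, tensor count and metadata key/value count, all little-endian):

```python
import struct

def gguf_header(path):
    # Read the fixed-size GGUF header: 4-byte magic b"GGUF",
    # then uint32 version, uint64 tensor count and uint64
    # metadata key/value count, all little-endian.
    with open(path, "rb") as f:
        data = f.read(24)
    magic, version, n_tensors, n_kv = struct.unpack("<4sIQQ", data)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}
```

This only validates the header; llama64.dll still performs the full check when it loads the model.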
We do recommend "Microsoft/phi-2" and also "TinyLlama/TinyLlama-1.1B-Chat-v1.0" for low hardware requirements. You can find most GGUFs at
https://huggingface.co/
Recently an improved "brittlewis12/phi-2-orange-GGUF" has become available, which performs better than "Microsoft/phi-2".
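For reference, direct download URLs for files hosted on Hugging Face follow a fixed pattern, which you can build yourself instead of clicking through the site. A small sketch (the filename passed in is whatever quantization you pick from the repo's "Files" tab; the one shown in the test below is only illustrative):

```python
def hf_gguf_url(repo_id, filename, revision="main"):
    # Hugging Face serves raw model files at
    # https://huggingface.co/<repo>/resolve/<revision>/<file>
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"
```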
We have recently published a repo at
https://github.com/FiveTechSoft/tinyMedical
where you will find the code to fine-tune TinyLlama with your own data; in this case we have used a medical dataset. We encourage you to start building your own GGUFs. The full source code is available in the repo. TinyLlama is great for getting started with AI training.
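When you prompt a chat-tuned model like TinyLlama, results depend on using the chat template it was trained with. A sketch assuming the Zephyr-style template used by TinyLlama-1.1B-Chat-v1.0 (check the model card to confirm the exact tags):

```python
def tinyllama_prompt(system, user):
    # Assumes the Zephyr-style chat template of TinyLlama-1.1B-Chat:
    # the model is expected to generate its reply after <|assistant|>.
    return (f"<|system|>\n{system}</s>\n"
            f"<|user|>\n{user}</s>\n"
            f"<|assistant|>\n")
```

The same string can be built in Harbour and passed to llama64.dll as the prompt.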