Greetings to the forum. Special thanks to Antonio Linares for last Friday's talk (12-4-2024), which I could not attend. Fortunately we have Cristobal Navarro to record everything and upload it to the channel: https://www.youtube.com/watch?v=3O_8pxw1wHc The talk presented a great deal of inf...
Silvio, Master Nages' model, with few changes, works perfectly on Windows 7 32-bit with HARBOUR and XHARBOUR. Test it on Windows 10/11.
... on the envelope, I never saw anything. Perhaps, if Mr. Rick had an image of how to print it, that would help. Or maybe there is a specific printer model for this... Good morning Maestro Otto. I am sure you are right. But I think it is difficult to help Mr. Rick, since I have never seen anything ...
... leave samples of others unchanged. Calm down, Enrico. When you are the OWNER of the forum, you can tell me what I can or cannot do. I made a model honoring you; I don't know why you were so sensitive, and I didn't even use offensive colors.
Simple code to use an AI model to build a dataset: generate.py

import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pre-trained model
model_name = "mlabonne/phixtral-4x2_8"
tokenizer = ...
... chatgpt to do it again without including previous responses. Once we have the dataset, then we train TinyLlama and we get our own specialized AI model :-)
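The dataset step described above can be sketched as a simple JSONL writer. This is only an illustration: the field names ("question"/"answer"), the sample pairs, and the file name are assumptions, not the exact format the TinyLlama fine-tuning tooling expects.

```python
import json

# Hypothetical Q&A pairs collected from the ChatGPT sessions described above
qa_pairs = [
    {"question": "What is FWH?",
     "answer": "FiveWin for Harbour, a GUI library for Harbour applications."},
    {"question": "What is a GGUF file?",
     "answer": "A binary file format for quantized LLM weights, used by llama.cpp."},
]

def save_dataset(pairs, path="dataset.jsonl"):
    # One JSON object per line (JSON Lines), a common fine-tuning input format
    with open(path, "w", encoding="utf-8") as f:
        for pair in pairs:
            f.write(json.dumps(pair, ensure_ascii=False) + "\n")

save_dataset(qa_pairs)
```

Each fine-tuning framework has its own expected schema, so the keys may need renaming (for example to "instruction"/"output") before training.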
Dear Jimmy, You may be able to use any GGUF file (open-source Artificial Intelligence model) via llama64.dll with Harbour and FWH. Please follow these steps to build llama64.dll so that it takes full advantage of your computer's capabilities: https://forums.fivetechsupport.com/viewtopic.php?p=266316&sid=79f17f8ebda5b77d799c73fb2a39838c#p266316 ...
Used code: https://github.com/FiveTechSoft/tinyMedical Trained model: https://huggingface.co/fivetech/tinyMedical-GGUF/tree/main Engine used to run the resulting GGUF file: https://github.com/ggerganov/llama.cpp You can also use ...
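One way to script the llama.cpp engine mentioned above is a thin wrapper around its command-line binary. This is a sketch under assumptions: the binary name (llama-cli), the model file name, and the flags shown come from recent llama.cpp builds, so check the options of your actual build.

```python
import subprocess

def build_llama_cmd(model_path, prompt, n_predict=128):
    # Assemble a llama.cpp CLI invocation: -m selects the GGUF file,
    # -p passes the prompt, -n limits the number of generated tokens.
    return ["./llama-cli", "-m", model_path, "-p", prompt, "-n", str(n_predict)]

def run_model(model_path, prompt):
    # Hypothetical helper: run the engine and capture its text output
    result = subprocess.run(build_llama_cmd(model_path, prompt),
                            capture_output=True, text=True)
    return result.stdout

# Example invocation (the model file name is illustrative)
cmd = build_llama_cmd("tinyMedical.gguf", "What is hypertension?")
```

From Harbour/FWH the same idea applies, except the calls go through llama64.dll instead of the standalone binary.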
Hi, the "problem" is that training (fine-tuning) a base model on your own data needs a lot of PC power. Question: is it possible to "rent" PC power to train your own model? :?: Dear Jimmy, You can use Google Colab with a T4. You have a certain amount ...
Dear Leandro, The first step is to create a dataset with questions and answers to train (fine-tune) a base model such as Microsoft Phi-2, TinyLlama, etc. Once trained, a GGUF file is generated, which can be used from FWH through llama64.dll. This is the free and private way. ...
Locally using a fine-tuned model with quantization:

!pip install accelerate==0.25.0
!pip install bitsandbytes==0.41.1
!pip install datasets==2.14.6
!pip install peft==0.6.2
!pip install transformers==4.36.2
!pip install torch==2.1.0
!pip install ...