cuda Quick Bash CUDA Toolkit and Drivers for 3060ti, 3080, 4090, 5090 in Ubuntu / Debian This quick script gets your NVIDIA drivers and the CUDA toolkit installed quickly so your LLMs will work!
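As a quick sanity check after the script runs, a minimal PyTorch snippet can confirm the driver and toolkit are visible (this assumes you have already pip-installed a CUDA build of torch; it is a sketch, not part of the install script itself):

```python
import torch

# Minimal sketch: confirm the freshly installed driver/toolkit are visible to PyTorch.
# Assumes a CUDA-enabled build of torch is already installed via pip.
print("CUDA available:", torch.cuda.is_available())
print("Detected GPUs:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"  cuda:{i} -> {torch.cuda.get_device_name(i)}")
```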
maya-1 Easy LLM: Maya 1 Text-to-Speech Hits it out of the Park on an Easy-to-Use LLM. We do a quick review and are pleasantly surprised at how quickly this LLM generates .wav files in the tone and intonation you describe to it.
higgs-audio higgs-audio Part 1: Roll Your Own Audio Books. Run This LLM on a 1660ti 6 GB Video Card on a Laptop - Actually... no. In this guide we download, run, and study the higgs-audio LLM, trying it on a 1660ti laptop. We eventually get it to work, but it's slow! We then modify generation.py to run on dual 3060ti cards on a Ryzen 9.
stability-ai Automatic Ad Generation Using Stability-AI, Try 2
image generation 20+ LLMs that Generate Images We review and show installation instructions for the top 20+ image-generation LLMs at huggingface.com
LLM LLM Image Generation on Dual 3060ti Video Cards Using stability_ai. With stability_ai we are able to generate beautiful images at about 18 seconds per image using two 3060ti cards with a balanced load.
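If you want to try a balanced dual-GPU layout yourself, a rough sketch with the diffusers library looks like the following; the SDXL model id and the device_map="balanced" option are assumptions here, not necessarily the exact setup from the article:

```python
import torch
from diffusers import DiffusionPipeline

# Rough sketch: spread a Stable Diffusion pipeline across two GPUs.
# The model id and device_map="balanced" are assumptions; recent diffusers
# releases accept device_map for pipelines and split components across cards.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="balanced",  # components distributed over cuda:0 and cuda:1
)

image = pipe("a lighthouse at dawn, photorealistic", num_inference_steps=30).images[0]
image.save("ad_image.png")
```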
Audio Audio LLM audio-flamingo-3-hf Text-to-Speech / Speech-to-Text
TensorTrade TensorTrade - A Reinforcement Learning Framework for Algorithmic Trading on the Stock Market.
pufferfish PufferLib - Reinforcement Learning at 1 Million Steps / Second.
Llama LM Studio 0.3.30 (Build 1) Awesome! Run LLMs At Your House Without Being A Coder. A Basic Review
GGUF 3-bit Unsloth Dynamic GGUF Outperforms models like Claude-4-Opus (Thinking) with a score of 75.6% on the Aider Polyglot benchmark
3060ti Running the LLM Ring-Mini 2.0 16B on a Ryzen 9 with Two 3060ti Nvidia GPUs.
3060ti Speed Comparisons: GPU/CPU vs CPU-only. Running Magistral-24B on a Budget.
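To reproduce that kind of GPU/CPU comparison at home, one option is llama-cpp-python with a quantized GGUF, varying n_gpu_layers between runs; the file name below is a placeholder, not the exact quant used in the article:

```python
from llama_cpp import Llama

# Sketch of a hybrid run: offload some layers of a 24B GGUF to the GPU and keep
# the rest on the CPU. Set n_gpu_layers=0 for the CPU-only baseline.
# The model file name is a placeholder, not the exact quant from the article.
llm = Llama(
    model_path="magistral-small-Q4_K_M.gguf",
    n_gpu_layers=20,   # raise until you run out of VRAM; 0 = CPU only
    n_ctx=4096,
)

out = llm("Explain the trade-off of hybrid GPU/CPU inference in one paragraph.",
          max_tokens=128)
print(out["choices"][0]["text"])
```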
LLM LLM Workups: Grok 4: Tower+ 9B from Unbabel. Penny-Wise Translators at 11 tokens per second on a bottom-end 3060ti GPU in 4-bit mode.
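For reference, squeezing a ~9B translator into 8 GB of VRAM with 4-bit quantization looks roughly like this with transformers and bitsandbytes; the model id is an assumption, so substitute the exact repository from the article:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Sketch: load a ~9B translation model in 4-bit so it fits on an 8 GB 3060ti.
# The model id "Unbabel/Tower-Plus-9B" is an assumption; use the exact repo name.
quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tok = AutoTokenizer.from_pretrained("Unbabel/Tower-Plus-9B")
model = AutoModelForCausalLM.from_pretrained(
    "Unbabel/Tower-Plus-9B",
    quantization_config=quant,
    device_map="auto",
)

prompt = "Translate to German: The weather is lovely today."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```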
LLM LLM Workups: Running Falcon3-7b on a Minimal 3060ti Ryzen 9.
AI The Great AI Explosion (Part 4 - Running a 3b-Parameter Coding Model from HuggingFace) In this example we run a small 3b coding model covering 12 languages on modest GPU/CPU parts, namely a 3060ti on a Ryzen 9. It works remarkably well!
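If you want to follow along, a minimal transformers sketch for a small coding model looks like this; the Qwen2.5-Coder-3B-Instruct id is a stand-in example, not necessarily the model used in the article:

```python
import torch
from transformers import pipeline

# Minimal sketch: run a ~3b coding model on a single 8 GB card.
# "Qwen/Qwen2.5-Coder-3B-Instruct" is a stand-in model id, not necessarily
# the one the article uses.
coder = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-Coder-3B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Write a Python function that reverses a singly linked list."
print(coder(prompt, max_new_tokens=256)[0]["generated_text"])
```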
AI The Great AI Explosion - Transformative Development (Part 2 - Levels of Coding Automation) In this article we go over the current leaderboards that track LLMs, and METR, which is tracking LLMs' progress in replacing software developers.
AI The Great AI Explosion - The Migration to AI and Agentic Systems (Part 1 - Introductions) We go over the explosion of LLMs (Large Language Models) and some examples of their current capabilities.