Discussion about this post

Shivam Kumar:

I have implemented the TurboQuant research paper. You can run massive-context-length LLMs without a high-end GPU machine.

https://substack.com/@shivamkumar337570/note/c-236283549?r=bqt8b
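For context on the claim above, the memory saving comes mostly from the KV cache. A rough back-of-envelope sketch (the model configuration below is a hypothetical Llama-3-8B-like setup, and this assumes simple uniform bit-width reduction, not TurboQuant's actual quantization scheme):

```python
def kv_cache_gib(n_layers, n_kv_heads, head_dim, context_len, bits):
    """Estimate KV-cache size: 2 (K and V) * layers * heads * head_dim
    * tokens * bits/8 bytes, converted to GiB."""
    bytes_total = 2 * n_layers * n_kv_heads * head_dim * context_len * bits / 8
    return bytes_total / 2**30

# Hypothetical config: 32 layers, 8 KV heads, head_dim 128, 128k context
fp16 = kv_cache_gib(32, 8, 128, context_len=128_000, bits=16)
int4 = kv_cache_gib(32, 8, 128, context_len=128_000, bits=4)
print(f"fp16 KV cache: {fp16:.1f} GiB")   # ~15.6 GiB
print(f"4-bit KV cache: {int4:.1f} GiB")  # ~3.9 GiB
```

At 4 bits the cache drops roughly 4x, which is why long contexts can start to fit alongside quantized weights on a small GPU.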

Rüzgar:

Run powerful AI on 4 GB of VRAM? How powerful? Powerful enough to tell me a kid's story? 😅😅
