Preemo

- Can We Prompt an LLM to Uncover its Dreams of Electric Sheep? by Lucia Mocz, Ph.D. · 14 min read · Jun 7, 2023
- Performance bottlenecks in deploying LLMs—a primer for ML researchers by Lucia Mocz, Ph.D. · 11 min read · May 10, 2023
- Squeeze more out of your GPU for LLM inference—a tutorial on Accelerate & DeepSpeed by Beite “Jupiter” Zhu · 11 min read · Apr 22, 2023
- Fine-tuning a model to speak English and Chinese: At Preemo, we’ve created a model that understands and produces both English and Chinese — by using an efficient, faster form of… · 6 min read · Apr 19, 2023
- Three traits of a task you can automate: Where will automation plug in? 1 of 3 in our Coding automation series. · 4 min read · Apr 10, 2023
- Three ways to think about coding automation: AI/ML tech is moving fast. Whether you’re an engineer, CTO, or somewhere in between, you’re likely wondering how to prepare for today and… · 1 min read · Apr 10, 2023