How to Run gpt-oss (ChatGPT) Locally on Mac with LM Studio
Running a Large Language Model (LLM) like OpenAI’s gpt-oss locally has multiple benefits, including offline access to a GPT-style AI tool. If you’re a fan of LLMs, AI tools, and ChatGPT, you might be interested in running a local instance of gpt-oss, but even if you’re fairly new to AI tools it can be an intriguing way to experiment. Additionally, if you’re one of the ChatGPT users who aren’t thrilled with GPT-5 and miss the earlier models, running gpt-oss on your Mac gives you another option.
We’re going to walk through a really simple way to get gpt-oss, OpenAI’s open model, running locally on your Mac. (And yes, you can run gpt-oss on Windows or Linux too using basically the same method outlined below, but we’re obviously focusing on macOS here.)
There are two versions of gpt-oss available; we’re going to focus on the lighter gpt-oss-20b model since it only takes up about 16GB of storage (the gpt-oss-120b model requires far more). It will perform best on a more powerful Mac with a good GPU, but in our testing the gpt-oss-20b model runs well and is more than usable on multiple M-series Apple Silicon Macs. Though gpt-oss itself runs offline with no internet access, you do need an internet connection initially to download the model files.
How to Run gpt-oss Locally & Offline on Mac
You can run gpt-oss (OpenAI’s open model) locally and offline on Mac easily with the free LM Studio app. Here’s how to get set up quickly:
- Download LM Studio free from https://lmstudio.ai/download
- Launch LM Studio and choose “Power User”
- At the next screen make sure gpt-oss is selected and then click “Download gpt-oss” to start downloading the necessary files
- When the download has finished, choose “Start a New Chat”
- At the top of the screen, click the model selector in the title bar
- Select “openai/gpt-oss” as your model, which will load the model for you to interact with
- When gpt-oss-20b has finished loading, you’re ready to interact with gpt-oss just like any other instance of ChatGPT, chatbot, or LLM
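The steps above use LM Studio’s built-in chat window, but LM Studio can also expose the loaded model through a local OpenAI-compatible server (started from the app’s Developer/Server tab). The sketch below builds a chat-completions request against that server; the address http://localhost:1234 is LM Studio’s usual default, but treat it, and the exact model identifier, as assumptions to verify against your own setup.

```python
import json
import urllib.request

# LM Studio's local server usually listens here by default -- check the
# Developer/Server tab in LM Studio for the actual host and port.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt, model="openai/gpt-oss-20b"):
    """Build an OpenAI-style chat completion request for the local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Summarize why local LLMs matter, in one sentence.")
print(req.full_url)  # → http://localhost:1234/v1/chat/completions

# To actually send it (requires LM Studio's server to be running locally):
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

Because the server speaks the same chat-completions dialect as OpenAI’s hosted API, most OpenAI client libraries can also be pointed at the local address instead.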
Enjoy your personal offline local GPT experience! You can use it just like any other LLM or chatbot: it can answer questions, solve math problems, generate text like letters or reports, analyze data, write code, and everything else you’d expect from a powerful AI tool.
Because gpt-oss is running offline, it will not be able to update with new information or fetch fresh data from the web, but even the 20b version was trained on a huge dataset. If you want more capability, at the cost of slower performance on most consumer Macs, you can use the 120b version, provided you have the disk space and memory to accommodate it.
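If you’re deciding between the two sizes, a rough back-of-envelope calculation helps: weight storage is roughly parameter count times bits per weight. The gpt-oss weights ship heavily quantized (around 4 bits per weight), so this illustrative helper, not anything from LM Studio itself, gives a sanity check rather than exact download sizes, which also include overhead such as higher-precision layers.

```python
def approx_model_size_gb(n_params_billion, bits_per_weight):
    """Rough size of the model weights in decimal gigabytes.

    size ≈ parameters × bits-per-weight ÷ 8, ignoring overhead such as
    embeddings stored at higher precision and runtime KV-cache memory.
    """
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# Assuming ~4-bit weights as a ballpark for gpt-oss:
print(round(approx_model_size_gb(20, 4)))   # → 10
print(round(approx_model_size_gb(120, 4)))  # → 60
```

Whatever the exact numbers on disk, the same arithmetic explains why the 120b model demands roughly six times the storage and memory of the 20b model.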
This obviously covers running gpt-oss, OpenAI’s open-weight model, locally, but there are many other models you can run offline, including Llama (and uncensored Llama variants), DeepSeek, and many others.
Whether you love or hate AI tools, they’re here to stay, and they’re increasingly powerful and ubiquitous. The ability to run models offline is of significant interest to privacy-centric users, and to anyone who just wants to experiment with an LLM without the massive data sharing and training that goes on with any internet-connected AI tool.
If you’re especially privacy conscious and want to guarantee the LLM has no internet access during your conversations, you could even download and install gpt-oss in a virtual machine first and then take that virtual machine completely offline, ensuring there’s no outside connection or access. People do this for many reasons, including using LLMs for very personal matters. It’s all up to you; experiment and explore.
Check out more AI articles, or ChatGPT-specific articles, if you’re into AI and LLMs!