Custom Proxies for Janitor AI

A Simple Guide to Using Deepseek & Other LLMs

March 15th, 2026

⚠️ Important Update: Free Models on Openrouter

Most of the decent free models on OpenRouter have recently been removed. You can explore the remaining options, consider alternative platforms for your LLM needs, or continue using JLLM (Janitor's LLM).

Under the Models section, I will curate a list of free models that, while currently unavailable, should return within the coming months. A collection of prompts will also be added to the guide soon to help you get the best results out of the limited free tier models and JLLM. In addition, I will curate a list of paid tier models on both OpenRouter and Chutes that are worth the price and don't return frequent errors.

In the meantime, I recommend JLLM, Janitor AI's default model, which remains a solid option for roleplaying. It can handle NSFW content and has no major limitations; a good prompt and the right generation settings can deliver great content with it. You can also visit Janitor's official Discord server to find and share prompts that work well with JLLM.

Please Note: As of March 15th, 2026, this is a current and up-to-date guide. I do not anticipate any changes that would alter the required steps or render this guide inaccurate, and I will keep it as up-to-date as possible.

⚙️ Getting Started

Because of the recent changes regarding DeepSeek, we will be using TNG: DeepSeek R1T2 Chimera (free) as our example model. As of December 2025, the only other free option is DeepSeek R1 0528 (free), provided by ModelRun, which has been throwing frequent 400 errors once a chat grows long enough to exceed the model's token limit. The Chimera model is one of the few free and reliable DeepSeek-related options still available. Follow these steps to set up your custom proxy for Janitor AI using OpenRouter:

  1. Go to openrouter.ai in your browser and register for an account.
  2. Click on settings (upper right corner) THEN scroll down to Default Model and select TNG: DeepSeek R1T2 Chimera (free) or your preferred model.
  3. Click the Privacy tab (left menu) and enable Model Training. If you do not enable this, you will get an error when generating a response.
  4. Open a new tab, click on the API Keys tab (left menu), and create a new API key. SAVE the key somewhere safe, because once you leave the creation dialog, you will not be able to view it again.
    • If you lose your key, you can always create a new one.
  5. Return to Janitor.ai and find a proxy compatible bot.
    • Proxy compatible bots will show "proxy allowed" text above the persona selector/start chat feature. Compatible bots usually have visible character definitions.
  6. After choosing a bot, open the API Settings from the menu (top right corner) and choose Proxy.
  7. Under the Proxy Model settings, select 'Add Configuration' and choose a Config Name (anything you want it to be).
    • You can create multiple configurations which is great for experimenting without changing your current configuration.
  8. Under 'Model Name' paste tngtech/deepseek-r1t2-chimera:free in the box (MUST be all lowercase).
    • When you determine the model you want to use, there is a clipboard icon you can press to copy your model name.
    • For example, mistralai/mistral-medium-3.1 would be the model name for Mistral: Mistral Medium 3.1
  9. Under Other API/Proxy URL type/paste this EXACT link: https://openrouter.ai/api/v1/chat/completions
    • This URL stays the same regardless of which model you use; it only changes if you switch to a different API provider.
  10. Finally, under API Key, paste the API key you generated and saved in step 4; otherwise, create and paste a new one.
    • Note that while the API Key field is labeled 'OPTIONAL,' leaving it blank will only result in errors.
  11. If you do not have any custom prompts to add, click Save Settings. If a pop-up asks whether you would like to set generation settings to OpenAI's default, click Yes.
  12. Open Generation Settings from the in chat menu (top right corner) and set your temperature and tokens to your preference.
  13. Close all Janitor.ai tabs and reopen them (or simply refresh). Now you can start chatting!
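To see how steps 8-10 fit together, here is a minimal sketch of the kind of request Janitor AI sends on your behalf once the proxy is configured. It assumes the standard OpenAI-compatible chat completions format that OpenRouter uses; the API key shown is a placeholder, not a real key.

```python
# Sketch of the proxy request, assuming OpenRouter's OpenAI-compatible API.
API_URL = "https://openrouter.ai/api/v1/chat/completions"  # step 9, exact URL
MODEL = "tngtech/deepseek-r1t2-chimera:free"               # step 8, all lowercase
API_KEY = "sk-or-your-key-here"                            # steps 4/10, placeholder


def build_chat_request(user_message: str) -> dict:
    """Assemble the URL, headers, and JSON body for one chat completion."""
    return {
        "url": API_URL,
        "headers": {
            # A missing or empty key is what produces the errors noted in step 10.
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        "json": {
            "model": MODEL,
            "messages": [{"role": "user", "content": user_message}],
        },
    }


request = build_chat_request("Hello!")
# To actually send it, you could use:
# requests.post(request["url"], headers=request["headers"], json=request["json"])
```

If a response comes back as a 400 or 401 error, the usual culprits are a typo in the model name, a wrong URL, or a missing API key, which is why steps 8-10 insist on exact values.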

⚙️ Understanding Models & Settings

Model Selection

Different models will yield different results. You may prefer certain models based on their ability to produce responses that match your style and deliver an amazing roleplay experience. Janitor.ai is known for its NSFW content, and some models won't support NSFW without jailbreaking and custom prompts—even then, decent responses aren't guaranteed. From personal experience, Qwen and Deepseek are solid NSFW-friendly models.

Key Settings Explained

Temperature: Controls the randomness and creativity of responses; where you set it determines how logical or creative the output will be.

  • Low temp (0.1-0.5) = More logical, focused, and consistent responses
  • High temp (0.8-1.5+) = More creative, varied, and unpredictable responses

Tokens & Context Window: Tokens are the fundamental units (words, sub-words, characters) LLMs process, while the context window is the limit on how many tokens (input prompt + generated output) the model can handle at once.
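For a feel of how text maps to tokens, here is a rough back-of-the-envelope estimate. The common heuristic of about four characters per token for English text is an approximation only; the real count depends on the model's tokenizer.

```python
# Rough token estimate using the ~4 characters-per-token heuristic for English.
# Actual counts vary by model tokenizer; this is only a ballpark figure.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)


prompt = "You are a friendly tavern keeper in a fantasy roleplay."
print(estimate_tokens(prompt))  # -> 13 (55 characters // 4)
```

A long chat history plus the character definition all count toward the context window, which is why very long roleplays can eventually hit a model's token limit.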

Recommended Settings

I normally set tokens to 0 for DeepSeek and prefer 1000 (the max) when using Qwen. Each time you change your model, you'll be reverted to JLLM's default settings (temperature: 1.1, tokens: 260). You can use the defaults, but I recommend starting with a temperature of at least 0.1 and adjusting for each new model. Access these settings in chat under Generation Settings.
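To show where these settings land, here is a sketch of how temperature and a token cap would appear in the OpenAI-compatible request body. It assumes (as is commonly the case) that a token setting of 0 means no explicit cap is sent; the helper function and values are illustrative, not Janitor AI's actual internals.

```python
# Illustrative sketch: attaching generation settings to a chat request body.
# Assumption: a token setting of 0 means "send no explicit max_tokens cap".
def with_generation_settings(payload: dict, temperature: float, max_tokens: int) -> dict:
    """Return a copy of the payload with generation settings applied."""
    payload = dict(payload)  # copy so the base payload is not mutated
    payload["temperature"] = temperature
    if max_tokens > 0:
        payload["max_tokens"] = max_tokens
    return payload


base = {
    "model": "tngtech/deepseek-r1t2-chimera:free",
    "messages": [{"role": "user", "content": "Hello!"}],
}
deepseek = with_generation_settings(base, temperature=0.1, max_tokens=0)  # tokens at 0
qwen = with_generation_settings(base, temperature=1.1, max_tokens=1000)   # 1000 max
```

Omitting the cap when tokens are 0 lets the model decide how long to respond, while a hard cap like 1000 keeps replies from running away.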

💡 Pro Tip: Experiment with different models and settings to find what works best for your roleplay style. Remember to adjust settings when switching models!

🌟 Free Models

Note:

As of December 2025, all free tier DeepSeek models have been removed, the TNG DeepSeek model most recently. Because of this, I will be curating a list of free tier OpenRouter models that may or may not return, with availability indicated in red/green, for you to test. In the meantime, I recommend trying the listed models whenever they are marked available, or JLLM (Janitor's default LLM). You can visit the upcoming Chutes and JLLM tabs for more options on paid tier models and for getting the most out of JLLM until good free models become available.