Best AI coding plan alternative to Claude and ChatGPT

With Claude's usage limits getting lower, I am thinking of jumping ship to a Chinese AI, since the benchmarks are already very close to Sonnet or Haiku 4.5, but at a fraction of the price. I am not worried about where my data ends up, though; I am focused on performance and usage limits. I mostly use it for coding and research.

However, I am still deciding which to use, and would love recommendations from anyone who is using any or several of these AIs:

- GLM Coding Plan (Z AI): $18/month Lite Plan
- BytePlus: $10 ModelArk Coding Plan
- Kimi AI: $19/month Moderato Coding Plan
- MiniMax: $20 Plus Standard Plan

I would like to ask: is the performance good? Is it worth the money? And how are the usage limits? Also, if anyone has a good recommendation for an AI plan that is only available in Chinese, I don't mind either, as I can understand Chinese.

15 points | by Jsttan 1 day ago

9 comments

  • aspectrr 1 hour ago
    I liked the GLM coding plan before they raised their prices; now their rate limits are stricter because they are compute constrained. It is still a good deal at 1/3 the price of Claude for the same quality.
  • irthomasthomas 1 day ago
    I like chutes. I think I get about 5K prompts per day for $20/m, though they may have stricter limits for new customers.

    This gives you practically unlimited usage of frontier models like Kimi, DeepSeek, and GLM. Their models are always full-size, never quantised except where the lab itself provides a 4-bit or 8-bit model. You can see from the model config exactly which HF model it pulls and the serving configuration used.

    Prompts are encrypted using a Trusted Execution Environment (TEE), so neither the model host nor a neighbour can view your prompts. That's as close as you can get to local-level privacy in the cloud.

    • comment0r 23 hours ago
      I tried looking into Chutes just now. It seems there is no easy way to just pay and start using it with OpenCode or Claude Code, right? Their docs don't seem to mention it. Do I really have to write code against their API in order to use the models?
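
If Chutes exposes an OpenAI-compatible endpoint (their docs would confirm; the base URL, env-var names, and model ID below are my guesses, not taken from their documentation), then a harness that accepts a custom base URL and API key should work without writing any code. A minimal stdlib-only sketch of what such a request looks like:

```python
import json
import os
import urllib.request

# Hypothetical values -- check the provider's docs for the real endpoint.
BASE_URL = os.environ.get("CHUTES_BASE_URL", "https://llm.chutes.ai/v1")
API_KEY = os.environ.get("CHUTES_API_KEY", "sk-placeholder")

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-style POST /chat/completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("deepseek-ai/DeepSeek-V3", "hello")
print(req.full_url)  # ends with /chat/completions
```

Harnesses that support custom OpenAI-compatible providers generally only need those two values (base URL and key) in their config, so if the endpoint speaks this protocol you would set them there rather than calling the API by hand.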
  • JSR_FDED 1 day ago
    I get Kimi through OpenCode Zen (kind of like openrouter for the OpenCode harness), periodically top up $20 and laugh every time I see my balance go down by 3 cents for something I would have happily paid someone $30.
  • serf 1 day ago
    nous portal or openrouter with a harness that does intelligent multi-provider requests, a local memory system, and pre-sub context compaction on input. If you do similar stuff often, your token usage will drop quite a bit after a while of using a memory subsystem like hindsight or honcho, and even more if you're using your harness to build relevant skills for the repeated tasks.
    • mistercheese 1 day ago
      Do you have a harness recommendation? Sounds like maybe you’re into Hermes?
  • fatbrowndog 1 day ago
    Not good. I use DeepSeek's plan, Kimi AI, and OpenRouter, and they seemingly consume more tokens than Claude does.

    On Claude Max x20, I consume ~30% of the weekly limit per day. The equivalent workload on Kimi AI consumes 60% in one day.

    On DeepSeek (latest), even at a 95% discount with caching, I racked up ~$60/day before I stopped.

    I don't know how Claude computes their daily limits, but it works out much cheaper.

    • Jsttan 1 day ago
      Which DeepSeek plan did you use? I have been trying to find a DeepSeek plan for a while but with no success. I tried the Claude $20 plan before, and tokens burned like air; it is quite hard to believe anything else would burn even faster?
      • fatbrowndog 1 day ago
        I'm using the deepseek-v4-pro model, which is currently offered via OpenRouter. My bad, it's a 75% discount, not 95%.

        I use the Claude Max x20 ($200) plan. I manage to max it out in 2 weeks. I'm planning to maybe move to multiple accounts.

        I use C++ and Claude on a big codebase.

    • mirmor23 1 day ago
      [dead]
  • sidcool 1 day ago
    Antigravity?
  • screenstop 1 day ago
    [flagged]
  • volume_tech 1 day ago
    [flagged]
  • abhishekhsingh 1 day ago
    [dead]