[1/7] LobeChat Cloud (lobechat.com) has been in public beta for over three months, and we've reached our second 🏁 revenue milestone x.com/arvin17x/statu… (I won't share the specific numbers so as not to sound like I'm selling anxiety 🤣). So let's talk about our practical experience over the past three months! Let's go~ 🚀
[2/7] Let me first talk about the product. Over the past three months, my strongest takeaway is that we should ship more new features! Each major feature brings in a new group of users: the knowledge base released in August and the Artifacts released in September each brought a large wave of incremental users. However, October had no new features (we only did subscription-related optimizations) and relatively little promotion, so growth slowed noticeably. We will therefore aim to raise the iteration rhythm to one major feature per month so that everyone can intuitively feel the product's progress. This month's feature will be the "branch dialogue" that many people have been looking forward to~ (I've wanted it for a long time 😆)
[3/7] During this period, I believe one of our biggest strides forward has been the comprehensive upgrade of our API gateway. Built on the excellent open source project LiteLLM, the stability of our AI APIs has improved significantly. LiteLLM not only supports multi-channel load balancing and on-demand traffic switching, but also handles retry and fallback on failure out of the box. 👍 For example, for Claude 3.5 Sonnet we currently load-balance across two tier-4 official direct API keys; if capacity runs short, we automatically fall back to the AiHubMix channel. At present, as long as Anthropic's official API doesn't go down, we can keep the Claude channel's availability at essentially 100% 😎 (and to cover the case where the official API does go down, we recently contacted GCP to open an account, ready to serve as a backup channel for Anthropic's official API in the future 😏). We also spent some time last month setting up an Uptime Kuma instance with full AI model monitoring, running heartbeat checks with real requests against every model. Since most of our channels connect directly to the official APIs, the heartbeat tests show an average request latency as low as about 1s ⚡️. The conversation experience for end users is effectively guaranteed 😆
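The routing behavior described above can be sketched in a few lines. This is a simplified illustration of the load-balance/retry/fallback pattern, not LiteLLM's actual API; the channel callables and the retry count are assumptions:

```python
import random

def complete(prompt, primary_channels, fallback_channel, num_retries=2):
    """Load-balance across primary channels, then fall back on failure.

    primary_channels: callables standing in for, e.g., two official API keys.
    fallback_channel: callable standing in for, e.g., an AiHubMix channel.
    """
    # Shuffle so requests spread evenly across the primary keys.
    candidates = random.sample(primary_channels, len(primary_channels))
    for attempt, channel in enumerate(candidates):
        try:
            return channel(prompt)
        except Exception:
            if attempt + 1 >= num_retries:
                break  # primaries exhausted their retry budget
    return fallback_channel(prompt)
```

LiteLLM's proxy implements this same shape declaratively: you list multiple deployments under one model name for load balancing and name a fallback model for when all of them fail.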
[4/7] Next, let's talk about product costs. Unlike many internet entrepreneurs, we're actually quite cost-conscious. As a small team, we didn't have much investment to burn, so we did a lot of cost-structure math early on. Suppose we add 100,000 new users: how much traffic, compute, authentication, and storage would they consume? What would that translate to on platforms like Vercel, Neon, Clerk, and Cloudflare? And what multiple of that cost should we charge to build in a profit margin and arrive at a sales price? This way, we can be confident that if our traffic suddenly explodes tenfold, our existing infrastructure can still handle it, and our profits scale tenfold with it.
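As a sketch, that back-of-the-envelope model might look like this. The per-user unit costs and the margin multiplier below are placeholder assumptions for illustration, not our real numbers:

```python
# Hypothetical per-user monthly unit costs in USD (placeholders, not real figures).
UNIT_COSTS = {
    "traffic": 0.010,   # CDN / bandwidth (e.g., Cloudflare)
    "compute": 0.020,   # serverless compute (e.g., Vercel)
    "auth":    0.005,   # authentication (e.g., Clerk)
    "storage": 0.003,   # database / object storage (e.g., Neon)
}

def monthly_infra_cost(new_users):
    """Total monthly infra cost if `new_users` join, assuming linear scaling."""
    return new_users * sum(UNIT_COSTS.values())

def price_floor_per_user(margin_multiplier=3.0):
    """Per-user price floor: unit cost times a target margin multiplier."""
    return sum(UNIT_COSTS.values()) * margin_multiplier
```

The point of keeping every cost linear in user count is exactly the last sentence above: a 10x traffic spike means roughly 10x cost and, with the multiplier baked into the price, roughly 10x profit.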
[5/7] Take Clerk as an example. After running it on Cloud, we found that Clerk is practically the best practice for SaaS auth when deploying overseas, and its billing is not as exorbitant as many people think. Many people see Clerk's $0.02 per MAU and think it's ridiculously expensive: with 20,000 MAUs you'd pay $400 a month, and with 100,000 MAUs, $2,000 a month. So you're essentially working for Clerk? But that's not the case at all. Clerk only counts a user as a billable MAU on their "second login." Most users visit your product once and never return, and Clerk doesn't bill for them; only genuinely active users count as MAUs. For example, Cloud currently has 24,000 registered users, but Clerk bills us for only around 3,000 MAUs. At that ratio, actually hitting 10,000 billable MAUs on Clerk would require a total user base of roughly 80,000. At our current paying-conversion rate, tripling our user base would be more than enough to generate stable, profitable cash flow. As a bonus, Clerk's billing model naturally gives us the key data points of the user-behavior funnel (registration -> active -> paid); getting that funnel data with self-built auth would likely cost extra. As for the cost breakdown of the other foundational services (Neon, Vercel, etc.), I plan to open a separate thread later, so stay tuned!
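The funnel arithmetic above, as a quick sketch using the rounded numbers from this thread:

```python
def billable_ratio(registered_users, billable_maus):
    """Fraction of registered users that Clerk actually bills as MAUs."""
    return billable_maus / registered_users

def registrations_for_mau_target(target_maus, ratio):
    """Registered users needed to reach a billable-MAU target at this ratio."""
    return target_maus / ratio

ratio = billable_ratio(24_000, 3_000)                  # 0.125: 1 in 8 users is billable
needed = registrations_for_mau_target(10_000, ratio)   # 80,000 registered users
```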
[6/7] Finally, let's talk about payments. We hit our first risk-control issue in August: two payment orders were disputed, and it turned out the payments had been made with stolen credit cards. There have been three such cases so far. The payment environment abroad really isn't as safe as in China; these three stolen-card disputes cost us over $100 🥲. So I strongly recommend everyone turn on Stripe Radar, otherwise you'll very likely run into stolen-card payments and disputes. When Stripe flags certain payment orders for review, email the account holder as soon as possible; if there's no response within a set window, refund the payment directly and mark it as fraud. This greatly reduces the chance of a subsequent dispute. In addition, taxation has started to become something we need to pay attention to. We're currently registering jurisdiction by jurisdiction (UK, EU, etc.) and will share once we have the full experience~
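That review workflow boils down to a tiny decision rule. This is a sketch only: the 24-hour response window is an assumption, and the actual refund-and-mark-as-fraud step would happen in Stripe's dashboard or API, not here:

```python
from datetime import datetime, timedelta, timezone

REVIEW_WINDOW = timedelta(hours=24)  # assumed window for the customer to reply

def resolve_flagged_payment(flagged_at, customer_replied, now=None):
    """Decide what to do with a payment Stripe has flagged for review.

    Returns "release" if the customer verified the charge, "wait" while
    they still have time to reply, and "refund_as_fraud" once the window
    passes with no reply.
    """
    now = now or datetime.now(timezone.utc)
    if customer_replied:
        return "release"
    if now - flagged_at < REVIEW_WINDOW:
        return "wait"
    return "refund_as_fraud"
```

Refunding proactively and marking the payment as fraudulent feeds Radar's signal and usually avoids the dispute fee that a chargeback would incur.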
[7/7] So, that's all the new insights and learnings from the past three months. Running lobechat.com hasn't been as smooth as we'd hoped, but thankfully we've made it through. Finally, we welcome everyone to try LobeChat, whether the open source version or the Cloud version (https://t.co/z4k5TITVKc). Criticism and suggestions are welcome, and we'll strive to make it even better. 💪 When we reach the next milestone ⛳️, we'll share our new insights with everyone.