Season 1 | Episode 2

The Superpowers of Running an Open-Source Startup

Jason Bosco, co-founder of Typesense, joins the show to share how the open-source search engine has scaled to 10B+ searches per month, 19M Docker downloads, and 23K GitHub stars. He dives into why open source has been a superpower for Typesense's growth, from building a passionate community to maturing features faster than closed-source alternatives.
Ben Rometsch

I think this is a record. Jason, this is the third time you've been on this show, I think. Once I told my story and you took my seat. I want to hear about Typesense and so welcome back and thank you for joining.

Thanks, Ben. I remember the last time. That was fun, like flipping the rules.

I really enjoyed it because I was itching to tell my story and you were the only interviewee who said, “I like hearing your side of things.” The first time we spoke was a few years ago. We had a little chat about how important it is, when you're bootstrapped, that time works for you rather than against you. A few years have gone by, and in that time, I’ve seen a bunch of massive billboards in the Valley and a NASDAQ ticker in Times Square. Do you want to give us a bit of an update on Typesense?

It feels like an eternity ago on one side, because so much has happened since the first time we spoke. In terms of stats, right now we're doing almost 10 billion searches per month. We're continuing to grow there and we have about, I believe it's 19 million Docker downloads now, and then we have 23,000 GitHub stars.

Think of the number of features we've launched since the very first time we spoke in 2021. This thing called AI blew up. Adding features that integrate some of these LLM capabilities has massively widened the scope of Typesense's feature set. Also, interestingly, search, evergreen a problem as it has always been, now has an even broader scope, because it turns out that LLMs, as fundamental as they are, still have to rely on search to get the latest information right. It's a very exciting time. We're continuing to see the prominence of search increase, and we're bringing some of these capabilities into Typesense as well.

We’re involved in another business and we use Typesense’s engine as part of it. We get some data, do a search through a Typesense index, get some results back, and then feed that into an LLM. That's interesting. You always think you're the only people doing that sort of stuff, and then it turns out loads of people are, so that's great to hear.

I wanted to talk about the superpower of being an open source startup. I feel like you guys have really heavily embraced the open source aspect of your project and your business. I just wanted to go through some themes that we felt at Flagsmith were important and then see what things you had in your head about how they've really benefited.

It's really interesting because we've got almost 2.5 billion API requests a month for feature flags, which is about a quarter of yours. We've also got almost exactly a quarter of the GitHub stars you have. That's a little bit weird. I wouldn't have expected the ratios of those two numbers to be exactly the same. I'm now wondering who's doing more compute cycles per request. That's interesting. Let's not go down that road.

This is quite interesting that the ratios are alike.

One of the things that Typesense always felt like to me as someone who's an end user of your product is that it always felt like it had a strong community behind it. Was that something that you designed from the start or was that something that just came about naturally and you had to try and encourage? What was the story behind that?

For us, it wasn't something that we explicitly set out to accomplish when it comes to a community around the product. I would say the open source ecosystem has this natural inclination to form a community, which again, we didn't realize when we started this. I'm happy that things blossom this way, but I feel like when you put something out there, open source, people are more likely to collaborate with you and use the product to begin with.

It's hard enough building the product, but then how do you get people to use it? I think open source helps reduce that barrier because people can run the code, look at the code, and then identify anything that needs to change to make it work for their use case. I think that mechanism naturally attracts folks to form a community around the product.

Typically, I’ve seen folks define community as people who contribute code, but in our case it's not just code. It's also people giving us feedback about how certain features would work better if we added other things, or how to fine-tune a particular parameter to significantly increase performance. Maybe they've run into a bug, and instead of asking us why it exists, they're able to identify the root cause and suggest improvements.

Sometimes it's even just asking questions. With typical closed-source SaaS products, the engagement you get from users about how the product should evolve is very transactional in nature. Somehow, with open source, the mindset of people participating is one of collaboration, where folks also feel a little bit of ownership of the project, for better or worse.

Sometimes folks take a lot of ownership and come in guns blazing in how they communicate what's important. Too much of that will burn people out, yes, but the fact that someone brings so much passion to sharing their feedback is something I find interesting, and even appreciate. They feel that this thing works so well for them except for this one little piece, and they put a lot of voice behind why that little piece matters to them.

I think open source has, in our case, naturally fostered that sense of community around the product, and I would say it has massively helped improve the stability of the product and the maturity of its features. The example I always fall back to is that there was a time when we didn't open source all of Typesense. There were some features we held back in a pro version. We didn't have Typesense Cloud then, but eventually we decided to launch it.

At that point, we said, “Let's just open source all of Typesense.” When we did, we ended up finding bugs in the previously closed-source parts of the code base, because people started using them and giving us feedback, and they identified some limitations there too. That is the perfect example of everything I mentioned before. Even within our own product, the parts that were closed source ended up maturing much faster once we open sourced them. I’ve seen the impact of going from closed source to open source within our own little ecosystem, which I found very fascinating.

I think it's because once it's open, it gets all the sunlight on it, isn't it? I'm sure thousands of people have downloaded Typesense and started using it, and if they hit a bug, or think they've hit one, generally you're going to find out about it. They might not interact with you in any other way whatsoever beyond pulling a Docker image.

I'd do that too. Maybe I'd assume it's not a bug and I just haven't figured something out. That's the sunlight it puts on the product. I remember when Flagsmith was getting more popular, it was fairly common to find issues, not in core parts of the platform, but in stuff that's a bit of a clanger, and now that's really unusual, because you just hear about it if there's an issue. The other thing I'm thinking about, from what you said, is that when I'm having a conversation about the business I'm working on, no one has ever been disappointed when I've told them it's open source.

People get excited when you say it's open source. I remember someone who hadn't even thought about open source alternatives to a closed-source platform they were looking at. Someone hinted, “Maybe you should look for open source,” and they found Typesense, and they were telling me how awesome that moment was, discovering that someone was actually building an open source alternative to a closed-source platform. They were delighted to explore it further. I’ve definitely seen that spark of, “Thank goodness someone is building an open source product.”

The interest level. If they're an engineer or in technology, the interest level never goes down when you tell them that piece of information. It always goes up to some degree. Actually, that's an interesting segue. You talked about the marketing you're doing, a bit more real-life marketing: billboards, the NASDAQ ticker, Times Square posters and so on.

How consciously, and how much to the front, do you bring Typesense's open-source nature when you're figuring out the creative and the copy for those ads? How do you go about figuring that out? It's a bit of a balancing act. Ultimately, those ads can cost a lot of money and you want some ROI on them.

We did hundreds of billboards in San Francisco, and we iterated a lot on the copy. Initially, we were trying to come up with something clever, something that hints at Typesense being open source but mainly talks about other things. As we iterated, we realized that for someone who hasn't heard about this space at all, getting a subtle hidden message across would be much more challenging.

Also, search exists everywhere, but people don't consciously think of things as search day-to-day unless they've worked on a search feature and realized, “We actually need a search engine for this.” What we decided to do was something super simple: highlight open source in every single billboard that went out. We decided to just say it is open source.

Another aspect is that people typically don't see ads for open source projects. We thought it would stand out that an open source project was putting up billboards everywhere in SF. I think that definitely resonated. I’ve heard a lot of feedback, people DM-ing me saying, “I didn't realize you guys were this big, doing all these billboards in SF for an open source project.” In fact, some folks also said, “We assumed that you were VC-backed.”

Looking at all these boards, they were pleasantly surprised that we're actually 100% revenue-funded. Going back to your question, highlighting the fact that it's open source also leads to the community celebrating on your behalf. We've built this product together in collaboration with the community, and I'm using that in a very broad sense. Every feature request and every question has led to us either building the feature, improving the documentation, or building demos on how to use the product in a certain way.

It helps round out the core feature set, again through documentation improvements and all these demos. The fact that Typesense is a usable product for a good number of use cases right now is a collaborative effort with the community. For example, we have a Slack community, and people just ask questions on Slack. It's a public community, so we index all of that. We make it publicly indexable, so Google has access to it and LLMs have access to it.

Sometimes, when I respond to these questions, I specifically correct LLM-generated code that people share in our Slack community, so that hopefully, in the next training run, these models pick that up and fine-tune on it. We're able to do that because people are asking these questions in public, and we can then answer them there.

Also, think of all these different use cases. There's writing the software, but then there's using the software in real-world scenarios, and that's what pressure-tests your own assumptions. With all these real-life marketing efforts adding so much visibility to the product, I’ve seen a sense of people being very happy for us, that the product exists and that we're doing well. From their perspective, it's also a sign that we're going to be around for a while and that we're financially doing well. That's the signal we intended to amplify with the billboards, the NASDAQ ticker, and so on. It has definitely accomplished that goal, way more than we expected.

One of the things I'm a big believer in is that the product you build is defined more by what you don't do than by what you do. The decisions where you say, “No, we're not going to build that feature.” That's happened at Flagsmith fairly often, where someone's come on our Discord or posted on GitHub and said, “Wouldn't it be great if you could do X or Y or Z?” You can understand the use case for it. It makes me a bit sad, because that person then gets the experience of us discussing it, maybe in public, maybe in private, maybe a bit of both, and then saying, “No, we won't do that, and here are the reasons why.”

That process of going, “Should we do this? No, we shouldn't. Why shouldn't we?” There's so much value that we get as a team from making those decisions and figuring out the reason we're not going to do something. Sometimes the reason is that it would take 8,000 hours of engineering time. More often than not, it's not an 8,000-hour thing. It's just that it would take the project in a direction that isn't what we perceive it to be.

It just makes me a bit upset because that's not a happy-path experience for someone contributing to an open source project, but we definitely extract a lot of value from that process. Maybe we should do some swag for people whose feature requests we say no to, something like, “I asked Flagsmith to implement a new feature and all I got was this hoodie.”

That's a cool idea, for sure.

I might have to write that down.

One thing that we've done is, when folks open a feature request and on first read it sounds way out there, even though our gut reaction might be that we'll probably never do something like that, we just let it sit. Any time in the future someone else mentions something similar, I specifically ask them not just to add a thumbs up, but also to add a little use case describing the scenario in which they would use the feature they're proposing.

Over time, as you read those scenarios, that's where you get more context into what folks are thinking about when they request this. There have been a case or two where a feature we thought we probably shouldn't add to Typesense ended up changing our own minds, just because people were using it in interesting ways. Given that people can use the product in so many different ways, it's hard for us to have all that context unless users explain it to us.

I’ve been trying to be very diligent about asking people to write up the use case they have in mind, in addition to the feature description. That has helped. There are still features sitting there just gathering this feedback. We haven't put a name to that state, but it's more like a “waiting for more conviction” state, as more and more folks add their use cases to it.

Some of them do end up adding a lot of complexity to the product when we add them. Sharing these different use cases and scenarios helps us convince ourselves that it's worth investing the effort, because if we do this, it also unlocks all these other things users have in mind. A thumbs up plus a comment about the scenario is what helped us.

In my head, I'm thinking that the requirements for a search engine are quite functional, whereas with, I don't know, a feature flagging platform, there are a bunch of different directions you could go in. One of the things we've found quite difficult is drawing the boundary lines of what we are and aren't going to do and be. I would imagine that for the core functionality of a search engine, it would be unusual for someone to raise an issue that's really out there. Have there been any cases where that's happened and it then made it into the core code base?

Yeah. What I’ve noticed is that the delineation is actually between two types of things. One is what the default behavior should be. People have different use cases, and different defaults work in different cases, so we have to be very conscious about what works for 80% of the use cases. We've had some discussions around that.

The other is what should be done inside the search engine versus what should be done in your application as post-processing. That's where, at times, we've had to say, “No, you could pretty easily do this on the application side.” In other cases, we've said, “Enough people have had to implement the same thing over and over on their application side, so let's bring it into the search engine.”

One example that comes to mind is the ability to filter within arrays of objects, like nested arrays of objects within a single JSON document. That would add a lot of complexity to the code base. We let it sit there, but eventually people kept asking for it, and we did end up incurring that complexity.
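As a hedged illustration of what that nested-object filtering looks like from the client side, here is a sketch of building such a search request. The collection, field paths, and values are hypothetical, not from the episode; in Typesense, filtering into nested objects also assumes the collection was created with `enable_nested_fields: true`.

```python
# Hypothetical sketch: search parameters that filter on a nested array
# of objects. The "orders"/"items" schema is invented for illustration;
# a real request would go through a Typesense client against a server.

def nested_filter_params(query: str, path: str, value: str) -> dict:
    """Build the search-parameter dict for an exact-match filter that
    reaches into a nested array of objects, e.g. items.category:=audio."""
    return {
        "q": query,
        "query_by": "items.name",         # search a field inside the nested objects
        "filter_by": f"{path}:={value}",  # dotted path filters into the array
    }

params = nested_filter_params("headphones", "items.category", "audio")
print(params["filter_by"])  # items.category:=audio
```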

On the flip side, another example is that people want the search engine to save search terms, execute them periodically, and send out webhooks when certain results match. Elasticsearch has this feature, and folks are like, “We use this in Elasticsearch. We want to switch to Typesense. Can you build that in?” That issue is an example of one where you could write a cron job on your side that does this.

We have to ask, “Do we really have to put this inside the search engine?” The last thing we want is to become yet another Elasticsearch and introduce all that complexity. That's an example of a thing a few folks have mentioned. It's sitting out there, and we're not morally opposed to it, but again, if enough folks ask for it and say, “Here are the different use cases where it'll simplify things,” then we may implement it in the core.
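The application-side workaround Jason describes can be sketched in a few lines. Everything here is hypothetical scaffolding: the saved-search list, the stand-in search function, and the webhook URL are invented, and a real version would call a Typesense client and an HTTP library from a scheduler such as cron.

```python
import json

# Sketch: run saved queries on a schedule and fire a webhook when
# results match. run_search stands in for a real Typesense search call.

SAVED_SEARCHES = [
    {"name": "new-rust-jobs", "q": "rust", "webhook": "https://example.com/hook"},
]

def run_search(q):
    # Stand-in for client.collections[...].documents.search(...)
    return [{"title": "Rust engineer"}] if q == "rust" else []

def check_saved_searches(saved_searches, search_fn):
    """Return (webhook_url, payload) pairs for every saved query that
    currently has matching documents."""
    notifications = []
    for s in saved_searches:
        hits = search_fn(s["q"])
        if hits:  # only notify when something matched
            notifications.append((s["webhook"], {"search": s["name"], "hits": hits}))
    return notifications

for url, payload in check_saved_searches(SAVED_SEARCHES, run_search):
    print(url, json.dumps(payload))
```

In a deployment, the loop body would POST each payload to its webhook URL instead of printing.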

In terms of hiring and building a team around the project, how has that been affected by being open source?

All the other companies I worked at before Typesense were not open source companies. The way we've been able to change our hiring process at Typesense, because we're open source, makes the process so much easier and more streamlined. It also removes a lot of the overhead in hiring that we used to have.

These days, anytime we're hiring folks, we literally point them to GitHub. We might point them to an existing issue and ask them to solve it, and we say it's going to be a paid project: “We'll pay you for it,” even as part of the hiring process. We treat it like just another open source contribution, except we're going to pay you because we asked you to do it.

Everything is on GitHub, and we communicate like with any other open source contributor. That's one style. The other style is that we come up with a scenario and say, “Write code, still open source, that solves for this, and we'll collaborate with you like it's an open source project. Everything is written down.”

We start out with that as the focus because that's what they're going to be doing day in and day out eventually if we end up hiring them. We want to simulate that as closely as possible in the interview experience as well. The fact that we're remote and distributed also lends naturally to this. In fact, we use a lot of written communication over face-to-face communication.

Thankfully, open source contributions typically work in that fashion too. Telling people we're trying to hire that it's just like contributing to any other open source project, except that you're going to get paid for it, makes for a very good hiring story. You're able to attract good candidates because the open source work they do becomes part of their public contribution history throughout their career.

Going forward, it's almost a too-good-to-be-true thing: you're paid to write open source software day in and day out. I think that messaging definitely helps in attracting talent, along with all the public statistics we have. It's a well-known, popular open source project, and being able to contribute to one and get paid for it is also nice. It helps improve your own brand. Compared to a closed-source SaaS platform, where your work almost lives and dies with the company, here your work is out there in public for everyone to look at.

The other aspect is that you get to work directly with your users, which typically might not be the case, or is at least harder to come by, with a SaaS product. Here, you work on an issue, you respond to whoever asked for it, you communicate directly with them, and you iterate with them. Those quick feedback cycles with users are somehow inherent to open source work. I wish it were like that with closed-source platforms too, where you'd work that closely with the development teams behind these SaaS products, but somehow that's not the case.

I wonder why. That's definitely true, and it's interesting to consider. I wonder if it's predominantly about team size, but I know what you mean. For some reason, there quite often feel like there are two or three layers of, I don't know, salespeople, customer success people, or second-line support people. Most large open source projects have a Slack or a Discord, and there's a bunch of engineers who hang out on them.

One thing I'm thinking of, and we've seen this at Flagsmith: because the work you're doing and most of the conversations you're having are public, when engineers are considering applying for a role, they can look at the public repository and see how it's being built, whether it's very light-touch pull requests or whether they're super intense with lots of people commenting on them.

We've had a bunch of people say, when we were hiring, “I looked at your repository and that was a team I wanted to work within.” Whereas if you were hiring for a closed-source company, you'd have to trust what you'd been told about it and then fill in hundreds of blanks about how things actually work.

It's almost like your team culture is also out in public when you're open source: how the team works with each other, how things are prioritized. Things like that are pretty transparent. That's a good point. I hadn't thought about it that way.

One of the engineers on the team raised an issue because some of us are obsessed with this video game called Balatro, which I don't know if you've heard of. He raised an issue in our GitHub saying that the eight people on the team who had never played Balatro needed to try it out. No one had ever raised a joke issue on the repository before. He didn't ask whether that was a good idea or not. He just did it, and everyone came on board.

We didn't do it for this reason, but if you think about it, it's potentially a great hiring thing, because you can see a little sliver of the team's culture coming through. Just one last thing before we finish up. You've mentioned this a little already. How are AI and LLMs affecting you? There's a lot of talk about how very few large language models are pure open source from soup to nuts, from the very first “here are the 87 gazillion bytes of data that we start the training with.” How's that world affecting Typesense, in terms of the code being written but also the larger ecosystem?

What we've been trying to do is leverage LLMs' ability to generate either syntax or natural language inside of Typesense. Given how fast consumer behavior is adapting to conversational search, it's only a matter of time before consumers start expecting that from the other applications they use, so developers will want to mimic that consumer-facing experience. That's one of the reasons we added what we call conversational search, which is really RAG behind the scenes.

We added that within Typesense: you just ask a question in natural language, and behind the scenes we do semantic search, pass the top results to the LLM, ask it to formulate a conversational response, store it in memory, and then support follow-up conversations with that memory context as well.
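A minimal sketch of that flow, with stub functions standing in for Typesense's semantic search and the LLM call. The function names, stub behavior, and prompt format are illustrative assumptions, not the actual implementation.

```python
# Sketch of the conversational-search (RAG) flow described above:
# semantic search -> top results into the LLM -> store the exchange
# so follow-up questions carry context.

def semantic_search(query):
    # Stand-in for a Typesense vector/semantic search call.
    return ["Typesense supports vector search via embedding fields."]

def llm(prompt):
    # Stand-in for a real model call.
    return "Answer based on: " + prompt[:60]

def conversational_search(question, history):
    docs = semantic_search(question)            # retrieval step
    context = "\n".join(history + docs)         # prior turns + retrieved docs
    answer = llm(f"{context}\nQ: {question}")   # generation step
    history.append(f"Q: {question}\nA: {answer}")  # memory for follow-ups
    return answer

history = []
conversational_search("How do I do vector search?", history)
conversational_search("And hybrid search?", history)  # follow-up sees history
print(len(history))  # 2
```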

We built that in because we started working on it when ChatGPT and chat-based interfaces started taking off. On the other side, as these LLMs have matured, you're able to get more structured responses out of them. Now, a feature we're about to launch is being able to take any arbitrary sentence and convert it into a well-defined Typesense query, inside of Typesense.

Instead of users having to search for something, look at all the filters, and try to refine the results, we try to infer the filters directly from the search query, doing intent detection and all of that. In the past, people had to build custom models for something like this. Now, with the magic of LLMs, we're able to prompt the LLM, have it “learn” the Typesense syntax, and have it generate the Typesense query for us. Instead of the user figuring out the exact way to get the search results, we let the LLM figure out the best search query to execute on the user's behalf.
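A toy sketch of the idea. In the real feature an LLM does the conversion; here a deterministic regex stands in for the model to show the shape of the transformation (the price example and field name are invented, though `price:<50` follows Typesense's filter syntax).

```python
import re

# Sketch of "intent detection": convert a free-text request into
# structured search parameters. extract_filters is a deterministic
# stand-in for what the LLM would do with the full query syntax.

def extract_filters(text):
    """Toy example: pull a price ceiling like 'under $50' out of the
    query and turn it into a filter_by clause."""
    m = re.search(r"under \$(\d+)", text)
    filters = f"price:<{m.group(1)}" if m else ""
    q = re.sub(r"under \$\d+", "", text).strip()  # remove the filter phrase
    return {"q": q, "filter_by": filters}

params = extract_filters("wireless earbuds under $50")
print(params)  # {'q': 'wireless earbuds', 'filter_by': 'price:<50'}
```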

Probably the most insane experience I’ve had working with LLMs in software engineering is getting them to write or iterate on SQL queries. It's such a perfect domain for them. In a few years, today's models are going to seem small in terms of context windows and so on. I remember wanting to do this really complicated thing in SQL, and a few years ago I probably wouldn't even have tried.

I probably would've had to spend a day or two writing a script to do it and then leave it running for 20 minutes. Instead, I spent ten minutes going back and forth. They're just unbelievably good, so I can imagine that the experience of writing what could be a really complicated search query could be crazy good.

It's also helpful that, thankfully, these LLMs have crawled the Typesense documentation. I would say they already know 90% of how to create these queries. For us, it's now a matter of figuring out which queries are being generated wrongly or hallucinated, so we can correct them on the fly with system prompts. Working on that feature was quite fascinating. If it generates something wrong, instead of fixing code, you're literally fixing the prompt to give it another example of what not to do in that situation. That's defining software behavior.

I submitted a pull request for a prompt change. I was like, “Wow.”

That's the consumer-facing side. On the code generation side, using LLMs as a coding agent, I’ve been experimenting with it. It's interesting how, for us, it spectacularly fails at anything algorithmic, at this point in time at least. It's been hard to use on core search infrastructure work. What it is good at is generating the UI aspects of Typesense Cloud, some of the UI components.

We use Vue on Typesense Cloud, and there are plenty of examples out there in the training dataset, so it's able to just do its thing. It saves a lot of time writing these complex frontend components. Even then, on the backend pieces, for infrastructure provisioning and so on, if it's a little too involved, it doesn't do well. You still have to review every single piece of code as it writes it.

Personally, I’ve found it most valuable for UI components. That's where it saves a lot of time. Also anything involving external API calls: say you're integrating with a different API platform and their documentation is public, instead of reading through every single thing, you just throw the docs at the LLM.

Have it at least give you an initial idea of how to interact with the API. I would say I’ve found it helpful with boilerplate work where you don't have to think twice, where there's not more than one way to do it. It's templatized and easy to write. It saves a lot of time, and things I would sigh about doing, I’ve now been able to accelerate significantly.

With Django REST framework, getting it to write sometimes more than just the stubs for the endpoints, and also the unit tests, it's amazing. It is just amazing at writing those, and sometimes that's the last thing you ever want to be doing yourself.

Automated tests, especially UI automated tests, are painful to write. I would say they're even painful for LLMs if you start from a blank project. We use Rails on the backend, and with our spec tests there are so many things that can go wrong when you write them, with mocking and stubbing behavior and all of that. It just explodes in complexity. At least it's able to get the scaffolding right, and then you can fill in the blanks as you need to.

I’ve noticed that it's amazingly good at writing mocks. pytest has quite a complicated, very powerful mocking framework, and Python is pretty good for wiggling around inside the guts of the interpreter, and I'm sure Ruby's the same. I'm learning how to do these sorts of things myself just by throwing problems at it. Before we go, in terms of Typesense, what's coming up next for 2025?

For us, the big thing, and I mentioned this before, is what we're calling natural language search, alongside conversational search, geo search, full-text search, image search, and all these. Natural language search essentially has an LLM pre-process the user query and convert it into a more structured Typesense query. Almost like text-to-SQL, it's text-to-Typesense-query, built inside of Typesense.

I'm super excited about that. I think it's going to open up a wide variety of use cases for retrieval. After a lot of experimentation, we've also finally found a nice use case for MCP, which I think is actually useful beyond all the hype it went through. We're working on a toolkit for Typesense developers to build their own MCP servers on top, which interact with their Typesense datasets under the hood. These are fundamental things that can open up broader use cases.

Especially because I would expect search engine results to be one of the most common bits of data you'd want for the RAG part.

Exactly. Typically, for an LLM to respond, it has to do some form of search, whether that's grepping or doing web search. What we're now thinking is essentially adding Typesense as a different way to do search, over a dataset you might have indexed in Typesense. It just opens up more possibilities for you and your users.

I would say these are the two big buckets of work, besides all the other things. People are still asking us for new features or enhancements to existing features as they use the product and give us feedback, and we keep rounding out those features. That's business as usual. We have to get to them and do more marketing as well.

You don’t have a Super Bowl ad coming up?

No. That'd be cool, though.

Maybe in 2026.

Once everyone realizes the importance of search and it's everywhere and we're like, “Use Typesense.” That would be cool.

Jason, as usual, it's been a pleasure talking to you and thank you. Good luck for the rest of the year.

Awesome. Thanks. It was great catching up and thank you for having me.

From Side Project to Times Square: Practical lessons from Typesense for commercial open-source founders

Typesense’s trajectory shows how to turn open source into a durable business: build in public, let the community harden the product, say “no” early and often, market the fact it’s open source, hire through the repo, and use LLMs where they actually help rather than where you wish they would.

1) Community isn’t a campaign, it’s the build system

Open code reduces adoption friction. People run it, read it, and promptly tell you what you’ve missed. That feedback isn’t just pull requests; it’s bug archaeology, parameter nudges, and “your default is hurting us” field notes. When Typesense opened previously closed bits, those parts matured faster under real use. Sunshine, meet software.

Do this

  • Treat issues like mini user interviews: require a short use-case narrative, not just 👍.
  • Keep support public and indexable. Let Google (and the models) learn from your answers.
  • Add a label for “waiting for conviction.” It’s not “no”; it’s “bring evidence.”

2) The discipline of “no”

Two lines matter:

  1. Defaults should serve most users most of the time.
  2. Keep engine concerns in the engine and everything else in application code, unless everyone is re-implementing the same workaround, in which case that workaround belongs in the engine.

How to triage

  • Tag requests engine-worthy vs app-layer.
  • Maintain a complexity ledger: test matrix growth, perf costs, docs debt.
  • If you must say no, say why. People accept boundaries; they resent silence.

3) Market “open source” plainly (and publicly)

Typesense went big with billboards in San Francisco and a NASDAQ moment. Subtlety did not perform. “Open source search” on the tin did. Engineers’ interest invariably climbs when they hear it’s OSS. The novelty of an open-source ad does some heavy lifting too.

Execution

  • Lead with “Open Source <Category>.” Spare us the riddles.
  • Pair it with proof (throughput, stars, downloads, latency).
  • Send clicks to a one-command try path or a hosted playground.

4) Hire in the open

The strongest signal is your repository. Typesense treats hiring like OSS: paid trial issues, written collaboration, real reviews in public. Candidates self-select after watching how you work, and their effort becomes part of their portfolio rather than vanishing into HR folklore.

Mechanics

  • Maintain Good First Paid Issues with clear acceptance criteria.
  • Keep contribution guides current; optimise for the first PR.
  • Publish ADRs and an engineering decision log. It’s culture, documented.

5) LLMs: useful servant, dreadful master

Search hasn’t been replaced by LLMs; LLMs require search to stay grounded. Typesense built RAG into the engine (conversational search) and “text-to-Typesense” to turn plain English into precise queries. Where models excel today: UI scaffolding, boilerplate, tests and mocks. Where they stumble: algorithmic core and anything with sharp edges.

Sensibly apply AI

  • Ship the scaffolding customers keep rebuilding: embeddings, rerankers, guardrails, eval harnesses.
  • Treat prompts like code: version them, test them, accept prompt PRs.
  • Publish a small, mean evaluation set to keep hallucinations out of production.
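One way to make that "small, mean evaluation set" concrete, sketched below. Everything here is an assumption: the case format, the `answer_fn` contract, and the deliberately flawed stand-in model:

```python
# Tiny eval harness: each case pairs a question with strings the answer
# must (or must not) contain, so a regression run flags hallucinations
# before they ship. All names and cases are illustrative.
EVAL_SET = [
    {"q": "What license is the engine under?", "must": ["GPL"], "must_not": ["MIT"]},
    {"q": "Max query length?", "must": ["characters"], "must_not": []},
]


def run_evals(answer_fn) -> list[str]:
    failures = []
    for case in EVAL_SET:
        ans = answer_fn(case["q"])
        for needle in case["must"]:
            if needle not in ans:
                failures.append(f"{case['q']}: missing {needle!r}")
        for needle in case["must_not"]:
            if needle in ans:
                failures.append(f"{case['q']}: hallucinated {needle!r}")
    return failures


# A deliberately wrong stand-in model, to show a failure being caught.
fake_model = lambda q: "It is MIT licensed." if "license" in q.lower() else "Up to 512 characters."
print(run_evals(fake_model))
```

String containment is crude, but a harness this small is cheap to run on every prompt change, which is the point: prompts get the same regression safety net as code.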

6) Choose metrics that buy trust

“~10B searches/month”, “19M Docker pulls”, “23K GitHub stars” do more than dazzle; they reduce perceived risk for buyers and candidates.

Show your working

  • Adoption and reliability: active deployments, upgrade success, p95/p99 latency.
  • Public status, changelog cadence, and post-mortems that actually explain things.

7) Revenue-funded ≠ small

Typesense’s message was clear: profitable, durable, and quite visible. For commercial open source, durability beats theatrics. Spend on things that compound—docs, examples, and tooling—then advertise the fact you’re still here.

Translate this

  • Pricing that scales cleanly from hobbyist to enterprise.
  • Talk about efficiency without chest-beating: low churn, sensible margins.
  • Invest profits back into community goods: docs sprints, examples, hack days.

8) A light-touch operating cadence

Weekly

  • Review the top 10 issues by use-case quality, not emoji count.
  • Ship docs or examples within 48 hours of a feature.
  • Rotate an engineer through public support to stay honest.

Monthly

  • Post Roadmap Notes: what moved from “waiting for conviction” to “planned,” and the nos (with reasons).
  • Refresh benchmarks and prompt guardrails with new failure cases.

Quarterly

  • Run a complexity budget: prune flags, deprecate rarely-used features, reduce surface area.
  • Host a community call that spotlights user builds, not just your release notes.

Final word

Open source isn’t just your licence; it’s your distribution, research lab, support queue, hiring brand, and now your AI substrate. Keep the core sharp, let the community pull you towards the right features, and package the AI bits so teams stop building the same scaffolding. Understatement aside, that’s how you get from side project to Times Square without losing the plot.
