Interview with Joel Griffith: Founder and CEO, Browserless
Ben Rometsch
November 23, 2021
Ben Rometsch - Flagsmith
Host Interview

Go find a big open-source project that lots of people are using. If people are having problems with that, there's probably something that can be done about it to make it better.

Joel Griffith
Founder and CEO

Check out our open-source Feature Flagging system - Flagsmith on Github! I'd appreciate your feedback ❤️

Episode Transcript

Joel got introduced to me by some guys who’ve been helping me with Flagsmith. He has an interesting story to tell. He’s also got some open-source royalty credentials, having been a Senior Engineer at Elastic. Do you want to tell us a little bit about your product and how it started?

The product is called Browserless. The name is a play on serverless infrastructure, where you don’t have to maintain a server to do your backend stuff. Browserless is the same thought, in that you don’t have to have a browser running in your own infrastructure to handle browser-related activities. It’s an interesting path how I got to it. Originally it was to do some scraping of retail pricing data for a wish list site. I was trying to build a simple wish list site where people could post links and we’d fetch the pricing data. We could share that with family and friends, so if somebody’s birthday came up, you’d know whether somebody had already bought that for them.

I ran into a couple of single-page app retail sites. It may have been Target; I can’t remember the exact brand. Trying to get HTML from them was impossible. You have to run a full-fledged browser to parse JavaScript and do network requests, and only then can you get the pricing data out of it. At that time, Headless Chrome was a recent thing. Puppeteer, which is now the leading library for doing automation with a Headless browser, I don’t think was even a thing yet. There was no place to run this type of thing. There were plenty of Selenium-esque providers out there but nothing for these new libraries that were starting to come out.

That was the birth of Browserless. I realized, “I need a way to do this and have some fine-grained control on how it goes.” I started a company called Browserless. We essentially host Headless Chrome for you, operationalize it and make sure all the fun things like fonts and system packages are there to run it sanely so you don’t worry about it.

It’s packaged up inside of a Docker container. We have the whole core engine open-source on Docker Hub. You can take it and run it yourself on your cloud provider. There’s a bunch of other fun things built in too, like queuing and concurrency limitations, because Chrome loves its RAM. We have to put some parameters on how much you can run at a time. All that stuff is built in and very configurable. You can do all sorts of fun, crazy things with it. You can think of it as Headless Chrome as a service.


Is it a coincidence that the new APIs in the browser came about the same time you were looking for a solution?

It was a good coincidence, almost a tragedy, I would say. I was working on a library to do some high-level stuff with Headless Chrome. Chrome has a very complex API internally, the DevTools Protocol. When you’re using the Chrome debugger, it uses the DevTools Protocol internally, but it’s very terse. You have to set up and enable a lot of these domains to get things working, network monitoring for instance. You have to set that up and make sure things are done in the right order. It’s a lot of complex state management.

I was working on the library to do all that and get it working. It was called Navalia at the time. About 1 or 2 months after that, Puppeteer came out. I was a little upset. I was like, “I wrote my library to run Chrome but got crushed by Puppeteer because it’s backed by Google.” Everybody was talking about it. It was on Hacker News. I didn’t know what to do because that was going to be the angle for this business. It was like, “Here’s this great library you can use. We host everything. There’s a way to instrument things to run in parallel versus serial.” A lot of DevEx stuff, developer experience, was built into it. At first, there was this sad period. I cried. I got steamrolled by Google and their awesome library.

For about a week, I was watching their library, seeing how people were using it, looking at issues and stuff like that. I happened to trip upon, “What’s the number one issue with this library?” At that time, the number one issue was, “I can’t run this in Ubuntu, Red Hat or wherever it was that people were trying to run this software.” I was like, “Maybe there’s still a good business angle here because people want to run it. They want to use it in their production systems but it’s hard and convoluted to do that.” I spent about a weekend hacking everything into a Docker container to see if I could do it, going through all of the trials of getting this thing to work in Docker.

It was painful. I could see why people were looking for help because it was not an easy task to find all the system dependencies and fonts. Once you got it running and starting it up, it would crash on random things. There are other packages now that make it easier to run Chrome operationally. So instead of a library-first kind of play, it ended up being, “Let’s operationalize it and make it scalable.”

I’ve used that advice for a couple of different people looking for, “What do I build? What do I need?” I was like, “Go find a big open-source project that lots of people are using. Sort by the issues that are the most commented on or reacted to. People are having problems with those. There’s probably something that can be done about it to make it better.” That’s exactly how Browserless started, looking at an issue, noticing a trend and building towards that.


Flagsmith has exactly the same story. Kyle, who I was working with, noticed that Firebase had a GitHub issue that had hundreds of replies, thumbs-ups and thumbs-downs with, “Why can’t I use these feature flags on the web?” We were on the web, and it could only target mobile devices. That was the actual genesis for the idea. We’d been looking at feature flags but we weren’t sure if it was the right project. That GitHub issue was the thing that got us to say, “There are hundreds of people that give a shit enough to spam this issue with, ‘Why doesn’t this do this? You’re Google. You must be able to make this exist.’” That’s great advice.

The other side of that too is where things get a little trepidatious. Once you have a solution there, you can post it as a comment, assuming the issue isn’t archived. You get a lot of visibility because people are watching that thread to see when it will get fixed. Post-launch, that was a growth hack strategy for us: find Stack Overflow posts where people are posting about the problem. You try to be sensitive and not spam like, “Check out my business.”

We did a lot of, “Here’s how you fix this. If that’s too much of a chore for you to do, then you can think about using our service.” Somebody had mentioned it at one point. They characterized it as: in order to find your customers, you’ve got to meet them, with the solution, at the place where they’re having the problem. That’s when they have stuff that needs work. GitHub issues are great: “I am having this problem. I need a solution.” It’s crazy that it worked out for you the same way.


I’m thinking of a little side project: scraping GitHub for the most active and common issues across everything. I bet there’s a bunch of businesses in there somewhere that you could probably extrapolate. I’m going to ask you a slightly annoying question, but how did you go from weekend side hack to, “I’m going to make this into a business?” Did it get picked up straight away from that Docker container or did you get a lot of feedback from people?

There were a few people from that GitHub issue that had reached out to me personally and were interested in running it because there was nowhere to run it at that point. I had about 2 or 3. You could think of them as closed alpha, closed beta folks that were trialing what was being built in this container. We were going back and forth like crazy. I was flying back and forth to New York for work. As soon as I got on that plane, I had a list of what I needed to do and I was heads down. There are no distractions in an airplane. Hopefully, no distractions that can affect you materially.

I was using any time I could, in hotels or waiting around, to hack on these. We had 2 or 3 people that were very interested in it and wanted to do something. They were hacking on it back and forth. The tricky thing for us originally was that Chrome is very resource-hungry. Getting something scalable was tricky. I originally wanted to do a big shared service that handles sessions from across the board. That turned out to be a tricky thing to get right. When you’re bootstrapped and self-funded in the beginning, you don’t want to be spending hundreds or thousands of dollars on infrastructure right out of the gate.

It was about finding a good fit for, “I’ve only got two customers. How can I make this thing start making some profit but not cost a lot in infrastructure?” The thing that kept me afloat early on, when I only had 5 to 10 people, was a very streamlined approach to product and resource intensity. People would sign up for a plan. When they signed up, I would go to DigitalOcean, which is where we hosted, and spin up the infrastructure that met the criteria for that plan. It’s not the best sign-up experience. It takes a couple of minutes to get everything set up and running and to update the load balancers to say, “This API token should go to these machines.” It worked and got enough people using it originally. It started making a little bit of money and paying for the baseline infrastructure.

From the first customer, we were in the black. It wasn’t much financially but it was something where I wasn’t losing money every month paying for servers. It was a lot of tiny baby steps. That was something that was nice for me personally. You’re working full-time, and you may have a family and other commitments outside of work. Having something that you can grow but not go 0 to 100 and lose your entire free time and life was a tricky balance. It was about having small amounts of customers coming in and growing it slowly. We now have usage-based accounts, which is a shared service. It’s because we make enough on top of what we have to charge normally that we can provide a pretty easy shared service that’s not too heavy-hitting for people that are budget-conscious.


I wouldn’t have the first idea about running a desktop application in a Docker container, let alone resource managing it. It’s interesting because it’s a domain that’s completely counter to what most engineers are probably using Docker for. Does it run in a Linux container?

Yes. Our FROM image in Docker is Ubuntu. We base it off of a very fat image to start with. In some cases, like 3D rendering, you need a bunch of interesting packages to get that all working and operating properly. Our Docker image itself comes out on the order of 1.5 gigabytes, which is fat for a Docker container. This is because of fonts. Fonts are bad because there’s a ton of them. It was tricky. Years ago, Docker was becoming more mainstream. Everything was using it.
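For a sense of what "fat image full of fonts and system packages" looks like in practice, here is a sketch of the kind of Dockerfile involved. This is not Browserless's actual Dockerfile; the packages listed are the usual Ubuntu dependencies people install to keep headless Chrome from crashing on missing libraries or rendering tofu for text.

```dockerfile
# Sketch only: typical system dependencies for running Chrome headless
# on Ubuntu. Fonts alone add a surprising amount of weight.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends \
      ca-certificates \
      fonts-liberation fonts-noto-cjk fonts-noto-color-emoji \
      libasound2 libatk-bridge2.0-0 libgbm1 libgtk-3-0 \
      libnss3 libxss1 \
    && rm -rf /var/lib/apt/lists/*
```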

In my day job, we were still transitioning services over to it, or starting to. It was new to me. Part of it was learning Docker, how you write a Dockerfile, what all that means, and screwing up the syntax over and over again. “Why isn’t this building?” The tooling has got way better and a lot faster too, for that matter. It ended up being a good fit. We abused some of the things in Docker a little bit but in the end, it worked out.


What aspects of the code that you’ve written are open compared to closed? You’re saying that the core engine is open but all that stuff around it is your proprietary business.

Every API that we have in our hosted product, you can grab and run from that Docker container. The stuff that’s proprietary is how we load balance. We support Selenium, and it is all over HTTP. It’s super chatty. If you’ve got a pool of 100 servers, getting calls for the same session to go to the same box can be tricky if you don’t have sticky load balancing. Our load balancers take care of that. Our API servers do a lot of session management across a huge pool of servers as well. There’s a lot of making the state of the world happen on our API servers that isn’t baked into the open-source image.

That was by design because all of these different cloud providers have different ways of handling their own load balancers, their own ways of doing sticky sessions. It’s hard to be prescriptive in how that is handled. We more or less are hands-off on that. If you’re running on AWS, you’re probably going to want to use one of their off-the-shelf load balancing products to do it. The open-source image mostly covers the engine, the things that you use Chrome for, but not a lot of the session and state management across all of them. That’s dependent upon where you’re running and how you’re running it.
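The sticky-session problem described here can be illustrated with a minimal NGINX config. This is a sketch under assumptions, not Browserless's setup: `ip_hash` is the simplest stickiness mechanism in open-source NGINX, keeping a given client's chatty session traffic landing on the same backend, with the upgrade headers needed to proxy WebSockets.

```nginx
# Sketch: basic sticky load balancing for session-bound browser traffic.
upstream chrome_pool {
    ip_hash;                   # same client IP -> same backend box
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://chrome_pool;
        # WebSocket upgrade handling
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Cloud providers offer their own equivalents (cookie-based stickiness on AWS ALBs, for instance), which is why it is hard to be prescriptive in the open-source image.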


Did you worry that there was this new library or there was going to be this land grab that you might get left behind in? When you started that Docker image, who were the competitors in that space?

The ones that I could think of were the big testing vendors like BrowserStack and Sauce Labs. Those companies are focused on testing, especially Selenium QA-type testing. There wasn’t a direct competitor that I was aware of. We were probably the first people on the scene. In the back of my mind, I was always worried about what would happen if Google decided to create a first-class service for this. That eventually did happen about 6 to 8 months in. They added support for Headless Chrome in their cloud functions. When that day happened, I cried. “This is it. This is the end of Browserless. There’s no way I’m going to recover from Google putting down their foot and saying, ‘This is how you do it.’”

It turns out even that still has issues. There is a notorious issue for Puppeteer in Google’s functions where there are high startup costs. It takes about a second for Chrome to start running inside of their function service before you can even interact with it. People were upset about that. They had made some movement in our space but it wasn’t enough to capture many customers. Of the couple of hundred customers that we had at that point, 1 or 2 switched over to the Google Lambda product. Part of it was co-location. They wanted to be closer to their other API servers, which makes a lot of sense. It wasn’t even that big. Even in a crowded space, having somebody that does something a little different, a little better, can be enough to build a business on top of.


I’m presuming there are competitors that are using a similar approach.

There are. Unfortunately, some of them are a carbon copy of exactly what Browserless does, even down to the API being identical. There are a couple of services out there that have a function API. Our function API is probably the most unique, exotic thing that we have. It lets you run arbitrary Node.js code with a browser in scope. You can be using any other language in your backend, send a request over to our REST API and say, “I want you to run this Node code, because we don’t run Node in our backend. Do this thing with Puppeteer and return the result.” To me, that’s an out-there, unique case for REST APIs.

It’s not really REST. It’s almost an RPC protocol. Some of these vendors out there have that same function signature. It’s like, “You’re probably using our container under the hood with some modifications.” At first, it irked me. I was like, “That’s a blatant copy, in my opinion,” if it’s an exact one-to-one copy of the REST API with the same signature, arguments and everything else. We were the first ones there. I’m not super worried about it. We’re going to keep “out-innovating” and moving forward with what we’ve got because we’ve got some cool stuff planned.


I’m interested in the positioning because when I think about Headless Chrome, I think about automated testing. Years ago, I worked on a startup that was screen scraping a whole ton of UK fashion eCommerce websites. We were coming up against the same problem, whether we needed to simulate a browser or enumerate APIs for the single-page app stores and things like that. Did you spend a lot of time thinking about what the use cases were and whether you should focus on one of those? I’m guessing that your customers have a wide variety of things that they use the platform for that you maybe never even imagined.

It’s exactly right. Originally it was for scraping. My chief thing was, “We should do screen scraping.” In the first couple of years, one person wanted to generate PDFs, something I had never intended. I was like, “If you want to do PDF generation, here’s an API for Chrome.” I even wrote a little REST API wrapper over how Puppeteer does PDFs to make it easier for people to do that. Another person does it for screenshot memes. It’s screenshots, PDFs and scraping. There are some esoteric ones out there where they will fire up Headless Chrome, launch a game server, host it in that Headless Chrome and then people can join in. They can record the session. The use cases are pretty wild.

Originally, I didn’t play too much into particular use cases. I wanted to focus on giving whoever is building whatever they’re building the best experience possible. The target audience was developers. The product was very developer-focused. There’s a lot of remote debugging you could do. That was the other thing that was tough with Headless Chrome. You’d write something that would work locally and push it out. For some reason, something would time out. You’d get a blank page back or something.

Being able to see what the browser was doing was hard, so we have a cool trick where we can start the Chrome session and pause JavaScript execution right away. On your account page, you get a little link to view that session. You can remotely watch it from a little screen viewer and get the whole host of Chrome DevTools at the same time. It makes the debugging experience in production way better. It’s not trying to record everything and then send it to you after the fact. You can sit there, watch it go through its steps and see what’s going on.

Developer experience was the biggest thing for me. I wanted it so I could use it for the things that I want to use it for, with the things that I would want. We’re starting to see a little bit more tightening down on use cases. We’ll probably have something specific for scraping. If you want the best scraper experience, here’s what you would use, same with PDFs and other static assets that you can get out of Headless Chrome. We’re trying to make the web more accessible and programmatic.


How did you come up with the pricing initially? Was it a question of adding a percentage or multiple on top of what was costing you? Did you have a free plan at any point in time?

I was concerned about free plans because the possibility of somebody signing up for a free plan and doing nefarious things was top of mind, whether that be DDoSing somebody or trying to swing stock prices with one quick mass push to buy and sell, pump and dump. I didn’t want to have to deal with those kinds of dark problems. We decided early on, no free plan. What we did instead is have a little demo server, a free server that you can send 5 to 10 queries a day to get a feel for the service and how it works.

If it works out great, you’ve done the whole thing and you’ll know how to interact with us. It’s five requests a day. Once you hit that, you start getting rejections back: you’ve met your free quota for the day. You can sign up for usage-based or whatever from there. That’s how it works. Pricing was simple. It was like, “What’s a livable margin that I can have at this moment to pay for my costs?” Pricing is still the one thing that I’m potentially worst at. It is hard to price appropriately.

Who’s your target audience? What value are they getting? How much does it cost you? What’s your cost to acquire customers? It’s this giant convoluted formula to figure out where the pricing is at. I have a lot of respect for people that do that because it is a very dark art on its own. It’s mostly still, “What’s a good margin? What are people willing to pay and not complain?” We did raise pricing at one point because we were getting flooded with people saying, “This is too cheap. You need to charge me more.” If people are saying that then there’s a good chance that you are not charging them enough.


The general wisdom is that you’re never charging enough. Some people seem to know what they’re talking about but some don’t. It’s interesting as well because looking at your pricing page, it’s difficult for you not to look like you’re selling a VM. You’re not selling a VM but it looks like an expensive VM kind of thing. The parameters you’re pricing against are the same as when you’re buying a VM.

It goes back to the fact that it’s hard to pin down a rule like, “If you’re running Headless Chrome, use this size of machine for everything and you’re good.” There are all these different sizes, and certain sizes work well for certain use cases and don’t for others. That’s why we eventually went with this choose-your-own-adventure model where you pick the size and the number of machines and then you go from there. That has worked out well for the last couple of years but we’re going to take a more serious look at it and see if there’s a better way to package that up.

It is also a point of confusion if you’re not very technical. If you’re a business owner and want screenshots of your shoes, you don’t care about what size machine and how many of them you need. You just want pictures of your shoes at the end of the day. It’s super focused on developers. Somebody would come into it like, “I’m buying a VM essentially with a lot of sugar on top of it.”

Revamping that is something we’re going to be looking at in the near future. The usage-based plan is the other one. That’s by far our most popular because you pay for the time the browsers are open. That’s even simpler than some of these other Lambda platforms where you have to figure out not only the size of the function but your data ingress and egress costs as well. There’s none of that. It’s a flat fee. It doesn’t matter how much CPU time or whatever you’re using. It’s nice in that regard.


Were you worried about people using the service nefariously initially?

I don’t want to say we price bad actors out, but if you wanted to do something like a DDoS, it gets costly quick, especially with Headless Chrome. Same thing with swinging a stock market. You need a lot of machines and IP addresses to do it. If you were to use browsers to do that, it would cost. Your cost to do that action probably isn’t worth what you’re trying to pursue. It’s better to do it yourself. In effect, we’ve tried to price out bad actors. It’s another thing I hear continuously with SaaS too: you want to price out people who are going to be leeching off of support the entire time. That was another reason for not doing free accounts. Those two will suck the life out of you trying to support them.

That’s always been a concern. We’ve seen a few people do things and know how to close them down. We don’t actively monitor granularly what’s going on because of HIPAA and GDPR stuff. Anytime we get requests from law enforcement, you’ve got to comply with them. It’s been minimal. In 4 years, maybe 1 or 2 times have I ever run into something like, “Customer, you probably shouldn’t be swinging gambling bets on the platform. That’s not okay. We’re going to have to revoke your token.”

Scraping is another one that’s murky because, for certain sites, it’s against their terms of service. You probably shouldn’t, but then a judge weighed in on the LinkedIn case and said that what they were doing was okay. That was a win for scraping, in my opinion. The web, even though it’s been around for a while, is still new. There are still a lot of undecided legal matters that people assume are going to play out one way or the other but until a judge rules on it, you have no idea if it’s going to pass muster or not. Thankfully not too much bad action, but we’ll see how that goes as we keep growing and new interesting things pop up.


In terms of the infrastructure, have you been managing it all by yourself? How big is the team?

The team is me. I’ve been managing it all by myself. This is a big concern for me. I’ve got a bus factor of one at this point. It’s me running everything. At first, there were a lot of sleepless nights, the 1st or 2nd year, trying to fix things and make things stay up. I have a rule of thumb: if I’m doing something manually once or twice a day, then that’s probably a good case for automating it. A lot of the problems that people would have were like, “My pool of workers is down. I need to relaunch.” Originally, that was a manual thing I had to do.

I had to go re-instrument everything and point their token to new machines. Now that’s a feature of the UI. If you’re running and somebody DDoSes you, there’s a button to reload your workers. Same with changing plans in the middle of a billing period. That’s always a fun one to try to figure out with Stripe. We use Stripe internally. Stripe’s prescriptive method is: whatever the difference was, charge them next month for that difference plus what their new plan is. I was like, “I can’t take that risk. What happens if you go from a $0 to a $1,000-a-month contract?” They’d see it at the end of the month and be like, “I’m not paying that bill. Good luck finding me.”

So I wrote code to figure out what that accrual would be and charge them right then and there for the change. Next month, they pay whatever the new plan is. Automating a lot of it is good, as is making strong bets on tech. Docker turned out to be a good bet on our end that worked out. All of our load balancers are pretty much bare-metal NGINX. Some of them have a little bit of Lua script, very minimal, to do some state or session management, but other than that, it’s mostly off the shelf. Anybody can look at it, get an understanding of what’s going on and run with it. As the business grows, some of the first hires will be support and SRE-type folks to keep things up and running.


What tool are you using to do all the orchestration? Did you build that yourself?

I built all that out myself. Docker was still a relatively new thing. Thankfully, DigitalOcean had a REST API for doing all of, “Start these machines or delete them.” After that, it’s pretty simple: shell onto the box, run these scripts to get everything started, clean up and then you’re done at that point. The only thing that I didn’t build is what’s called Amplify. It’s an NGINX SaaS product that watches your NGINX machines, checks on their health and that sort of thing.

We have that running on our 4 or 5 big load balancers, and some monitoring on DigitalOcean for core infrastructure when memory or CPU gets out of hand. There is a lot to be said for vetting things well and making sure they’re ready to go before you push them out. You’ll have a lot less heartache after the fact. We’re probably a little slower to get new stuff out because I’m pretty cautious when things go out. I want to make sure they’re ready to go so we don’t have to be up in the middle of the night trying to debug something out of nowhere. That’s about the size of it.


Who do you talk to when you want to bounce an idea off someone? I remember when we started a software agency years ago and we ran it for three years. It was the business guy and me. The thing that drove me mad was not having anyone to talk to. When we made our first hire, I talked his ear off: “What do you think about this idea that I came up with years ago?” How do you deal with that problem?

That’s a tough one.


Is it just me?

I don’t think so. Thankfully, I had a full-time job with a lot of engineers, so I would have water-cooler chats bouncing things off them, like, “What do you think of this and that?” Part of the reason I didn’t have to think about it so much was that it was such a new thing. People were telling us, “You need to do this or should do this because it would help us out a lot.” It was mostly a listening exercise for me and deciding, “I’ve got 3 or 5 people telling me this. I should probably go build that instead of this other thing.” The other part of it was when you get that idea of something that you want to do or think should be built.

There are phases of it, or thresholds, where it’s like, “This might be cool. Maybe I should go talk to people and see what they think,” or, “I like this idea. I’m going to talk to one person. If they say yes, I’m going to go build it.” Then there’s the cream of the crop where it’s like, “Why doesn’t this exist? I’m working on it.” It was so new. The remote debugging session feature was like, “Why the hell doesn’t this exist? I want it now. I’m going to go build it. I’m not even going to talk to anybody because I will use it right away when it’s there.” A lot of it was the latter. It was a new thing.

There were a ton of “the world is your oyster” problems. They were all interesting to me and I was very self-motivated because of them. Some of them do get you into the weeds, like, “If I am standing up a new server automatically and it fails at this step, what should I do? We could tear down the server, try the last step again or retry the whole thing.” I don’t know which ones are right and more successful. Those problems do keep you up at night. It’s like, “I did this thing. I’m not sure if it was the right thing.” If I could talk to somebody else, they’d probably have a better piece of historical data they could reference and say, “Don’t do that. Do it this way, because when we were at X, Y and Z, that didn’t work out for us.” You’ve got to get that kind of closed circle somehow.


It cuts both ways. I have long debates with some of the guys I work with about it. It takes up quite a lot of time. “Should we solve this technical problem this way or this way?” There’s value in that discussion but sometimes I feel like there’s not. I’m sure they feel there’s not either but we’re both being pigheaded and going, “My solution’s better.” There are benefits. The problem is you’re racked with worry. You never know if this small, seemingly insignificant decision that I’m about to make is going to come up in three years and generate 1,000 hours of work for me.

I have an example of that one. As far as outages go, I can only count 2 or 3 in the last few years that were particularly bad. It’s been some time since we’ve had a major outage. The worst one was one of those quick decisions. I was working on something in the core engine that was dealing with socket connections. One of the WebSocket connections comes in and hits Node.js first. This is usage-based: if they didn’t have enough time and their credits for the time had expired, we’d drop that connection and send back what looks like a 500. “We can’t run your session. We’re closing it.”

On my ten-year anniversary with my wife, we were on a remote island. It was Bora Bora. It had a very crummy connection. I woke up one morning to a stream of texts and messages: “Something’s not working in usage-based.” There were lines and lines of it. I get up, try to get on the Internet and find out that the internet there is not working, so I did the cell phone thing and used data. It was ten minutes of, “Cell phone provider, can I have more internet here, please?” Once that was done, I was like, “I’m going to relaunch that entire fleet of servers because that’s probably what the problem is. Something got stuck. I need to figure out what it is later, but this should get things back working.”

I did that, and errors came racing back as it came up again, on and on. I was going through all these exercises of, “It could be this or this.” It turns out it was the simplest thing. NGINX, when it sees a 500 come back from a server in a pool, puts that server in timeout. It’s like, “You gave me a bad response. Clearly, something is broken in you and we’re going to wait before sending you traffic.” One person was sending a bunch of requests that they didn’t have the time credits to run. All of our servers were saying, “Nope, can’t run that.” They all got put in timeout, and then NGINX is like, “502 Bad Gateway. We can’t do anything.”

That was a painful lesson. You should always write back some semantic response, even on a socket connection, because you don’t know what’s downstream from you or handling things at a higher level, and what they’re going to do with that response. If it’s in the 500 range, they might never send you traffic again, and that’s on you to figure out. It took about 3 to 4 hours, which in outage territory is pretty bad. Some of those users I moved onto dedicated accounts, because those run on entirely different infrastructure, and they were off and running from there. People who were critically impacted were able to get back in shape quickly. Those are my favorite kinds of stories because you always learn something wildly stupendous from them. It was wildly stupid.
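What bit Joel here is NGINX’s passive health checking of upstream pools. A rough sketch of the relevant knobs (the addresses, port, and numbers below are made up for illustration, and defaults vary by NGINX version):

```nginx
# Passive health checks: after max_fails failed attempts within
# fail_timeout, NGINX stops sending that server traffic for fail_timeout.
upstream browsers {
    server 10.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:3000 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://browsers;
        # Whether a 500 counts as a "failure" depends on this directive.
        # With http_500 listed, every 500 response marks the upstream as
        # failed -- so a burst of 500s from one abusive client can put the
        # entire pool in timeout, producing the 502 "no live upstreams"
        # behavior described above.
        proxy_next_upstream error timeout http_500;
    }
}
```

This is why the lesson is to reply with a semantic client-error status (a 4xx such as 429) rather than something in the 500 range: load balancers generally treat 5xx as “this server is broken,” not “this request was bad.”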


It’s usually the seemingly benign things, the ones you would never consider. You think you know which ones you need to keep an eye on for the next ten minutes, and then there are the ones where you’re like, “That’s fine.” In my experience, it’s those that can be the most painful.

As a single person, an indie hacker, small and bootstrapped, you always move fast and break things, but the only person who handles the fallout is you. Be diligent. I don’t release on Fridays anymore, or after 2:00 PM, unless somebody is in a hurt place and needs a fix; then we figure something out on the side to get them running. I do not make core, big changes without being around for at least 24 hours after the fact. The devil is in the details. NGINX was behaving the way it does out of the box and doing what it should be doing. As somebody who’s used NGINX quite a bit, I didn’t even think about the fact that if those all return 500s, nothing’s going to be up. You learn a lot. Try to remember it the next time so you don’t have to live that life experience again.


Have you got any major new tentpole features that you’re working on or you got in your head? You might not tell me because someone might steal the idea.

We were the first ones to do this thing; that part is always going to be there. Now it’s getting parity with some of these other use cases. Scraping is a big one, and we don’t have a proxy service built into the hosted service yet. What we’re working on is getting a good proxy product built into it, which is a perilous thing to try to do, because so many of the proxy services out there are filled with bad actors and dark patterns. Even vetting and trying to find somebody who’s worthwhile is tough. That’s a big one. I’ve always wanted to do a no-code approach. We have a pretty good proof of concept where you can go to our website and say, “I want to start a script.”

Right on that same website, no extensions, nothing to download, nothing to sign up for, you get, “Here’s a blank web browser. You tell us what you want. We record everything that’s going on and assemble a little script for you.” Low-code/no-code is a big one for me. That has its own challenges as sites change over time. Getting that right is going to be key to making it successful. That one’s probably a little way out, but it’s the one I’m excited for because it’s fascinating.

Then there’s squeezing every ounce of performance out of Chrome. There’s a website that tries to catalog all the flags you can launch Chrome with, and it runs into the thousands. That site is not even comprehensive. It’s always fun when somebody pops up and says, “We found that launching with this flag gets you a little more performance.” Those little nerdy performance aspects are always fun. We’re always looking at those, trying to find a good balance.
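For a flavor of what tuning launch flags looks like, here is an illustrative Headless Chrome invocation. These flags all exist, but their effects shift between Chrome versions and workloads, so treat any performance impact as something to benchmark yourself rather than a guarantee:

```shell
# Illustrative only: flag behavior varies by Chrome version and workload.
# --disable-dev-shm-usage works around the small /dev/shm in many Docker
#   images; --disable-gpu and --mute-audio trim work that headless
#   workloads rarely need; the debugging port is what automation libraries
#   like Puppeteer connect to.
chrome \
  --headless \
  --disable-gpu \
  --disable-dev-shm-usage \
  --disable-background-timer-throttling \
  --disable-extensions \
  --mute-audio \
  --hide-scrollbars \
  --remote-debugging-port=9222 \
  about:blank
```

The thousands of other switches (and the compile-time build arguments mentioned below) are why combinations are hard to reason about: two flags that each help in isolation can interact in surprising ways.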


I opened that flags page for the first time in what seems like years and I was terrified. I had no idea. You suddenly see the scale of that application right there.

It is a beast, and it’s one person trying to document them. I’m not even sure whether they’re internal to Google or not. It is a page to be reckoned with, for sure. On the flip side, there’s a whole bunch of flags you can compile Chrome with that reduce the size, make it faster in some regards, or produce dev builds. There’s a big matrix of, “If you build it with this and run it with this flag, you get this outcome.” Nobody knows what they all are. It’s so nebulous and always changing. To be honest, I don’t even know if Google has a good grasp on every combination of build arguments and runtime flags and what’s going to happen. Making more sense of that would be awesome, but it is perilous.


Have you got contacts with Google? Do they ever talk to you about it? Do you ever talk to them?

Paul Irish has reached out a few times. He and I talked a couple of times early on, when I was writing the early versions of this, before Puppeteer. He had a harder time tweaking it than I had writing this library. Addy Osmani has reached out a few times to say, “There’s this thing coming that might affect you,” which is cool. It warms my heart, because they were the people I looked up to when I was getting started as a web developer. On the flip side of that, though, you get an email from Paul Irish saying, “Your site doesn’t have a good 404 page. You should fix that.”

“I got called out by Paul Irish. I should probably do my due diligence on these little things.” There’s so much to build that the 404 page on my website is not top priority almost all of the time. I want to make sure the account page works, sessions work and the database is up. There’s a lot. The flip side of it is getting called out by the big guns in the industry.


In terms of the open-source Docker container, is that something that has its own community as well?

We do have a Slack channel, and activity there comes and goes. It can be pretty quiet, which is a good sign: people aren’t having problems with it. People also post projects that they’ve worked on with it. There have been a couple of cool things. When the pandemic started, a guy built a cool data aggregation utility with Browserless. That was cool to see. On Docker Hub itself, we’re at hundreds of millions of downloads for the container, which is pretty intense. I was shocked. It’s out there, and quite a few people use it, which is cool.


Do you have any problems in terms of licensing it with that binary or not?

That’s a tricky one. We’ve tried to be super permissive on licensing, so there are probably a good number of commercial users who should be paying us. We don’t do callbacks home to see if they’re licensed to be doing what they’re doing or anything like that.


And in terms of the actual application itself?

Chrome is licensed under Apache or some other permissive license. That was a big concern originally. I wondered, “Can I actually do what I want to do with this?” The feedback was, “You’re good. If you show the Chrome logo, don’t mess with it; keep it as-is. It’s supposed to keep its visual identity.” I can live with that. Even beyond that, all the open-source package licensing is convoluted and tough. If you’re going to be reselling it, you’ve got to make sure that all the software you use is licensed appropriately for that. That, for sure, I keep track of and watch. I’m surprised there isn’t a utility or an app out there that helps you navigate all of it, because the licensing isn’t just the top-level package: you have to check what that package is using internally, to make sure nothing has an incompatible license. There’s this huge license tree you’ve got to figure out and check for compatibility.
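In the Node ecosystem there are at least partial tools for walking that dependency license tree; the `license-checker` npm package is one example. A sketch, assuming an npm project in the current directory (the allow-list below is illustrative, not legal advice):

```shell
# Summarize the licenses of every package in the dependency tree.
npx license-checker --summary

# Fail if anything in the production dependency tree falls outside
# an explicit allow-list of license identifiers.
npx license-checker --production --onlyAllow "MIT;Apache-2.0;BSD-3-Clause;ISC"
```

Tools like this only see declared package metadata, though; bundled assets such as fonts, images, and binary blobs still need manual review, which is the gap Joel is describing.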


That sounds like a nightmare. For products like Flagsmith, you assume everything is LGPL all the way down or something like that, but you’re bundling fonts that people have designed, and there are maybe some binary blobs in the compilation step. That’s horrific.

It is tricky. The biggest one is emojis. The cream of the crop is the Apple emoji images. Everybody loves those; you see them all over the place, but they’re heavily locked down license-wise. There’s a way to get them running in Headless Chrome, but we’ve never done it, because the licensing is very scary for something like that. We’ve gone with Twitter, which has an open-source emoji font library that we use. There are also the Google Noto font sets, which are very permissively licensed. We use those as well for other languages.


We’ve reached the most esoteric licensing point in the show with emoji. I’m wondering whether we’ll ever get more esoteric than what the emojis are licensed under. You have to think about this stuff when it’s your livelihood.

Writing it is one thing, charging people for it is another, and reselling it is a whole other matter license-wise. You don’t necessarily know whether you’re compliant until a judge rules on it. That’s a tough one, and a whole other can of worms to get into at some point, but licensing is always fun.


Joel Griffith, thanks so much for your time. I’m going to end on licensing emojis, because it can only go downhill from there. It’s super interesting to hear that story, especially from a technical point of view. I feel that you’ve managed to achieve so much in what is an unusual space, which is very multidisciplinary.

Thank you so much. I loved being on the show. I appreciate you having me. For anybody hacking on something, keep trying. The biggest advice I give is to keep trying. If the first idea fails, move on to the next one. This is idea number 15 or 20 for me, the one that finally got traction somewhere. It’s like the lottery. You’ve got to keep playing.


Thanks, Joel. Good luck.

Joel Griffith

Several years ago, I was an engineer working on PDF print functionality for my employer. This capability allowed our users’ dashboards to be “PDF-ified,” and then sent out to their teams and customers.

This was a very important feature! Fortunately, we were able to do this ourselves rather quickly, using Headless Chrome inside our application. The solution was easy to get started with, but supporting it wasn’t so great: I’d estimate that my colleagues and I spent roughly half our time helping our users troubleshoot getting Headless Chrome up and running.

When we weren’t doing that, we were likely patching issues and addressing vulnerabilities inside these browsers. As little as 5-10% of our time was spent on new features and delivering business value. Over time, I found out we weren’t alone in this.

It was this that prompted me to start Browserless.

I had seen enough issues, wrangled with ongoing maintenance, and lost enough feature time that I decided that someone needed to fix these inherent issues with headless browser workflows.

Browserless is here so you don’t have to pay a team of engineers to build, deploy, and maintain these systems.
