Interview with Joe Duffy, CEO/Founder at Pulumi
Ben Rometsch
February 20, 2024
Ben Rometsch - Flagsmith
Host Interview

Joe Duffy
CEO/Founder at Pulumi

Join us in this episode, where we take a deep dive into the story of Pulumi with its CEO and Founder, Joe Duffy. From its inception in 2016 with the ambitious goal of simplifying cloud infrastructure management, Joe takes us on a journey through the team’s evolution that helped Pulumi become what it is today. He talks about the intricacies of Pulumi Cloud, the Insights product, and the strategic alignment of Pulumi’s business model with the community's success. Get ready for a candid conversation as Joe shares his insights on overcoming challenges, chasing future goals, and dreaming big about an open ecosystem in cloud infrastructure management. Tune in now and discover how Pulumi is shaping the future of cloud development!


I have Joe Duffy from Pulumi. Welcome, Joe.

Thanks for having me, Ben.

Before we started recording, I was saying that Joe and Pulumi are one of the few guests we've had on the show that we are customers of. We’re great fans of the project and the platform that you guys provide. It keeps Flagsmith running, hopefully, green as much as possible. Do you want to give us a bit of background on yourself and the idea of what is now the Pulumi organization?

I'll give you the abbreviated version and if you want to jump into any details, I’m happy to do so. I love hearing that you're a Pulumi customer. I did not know that until joining this call. Shame on me.

It must happen relatively often.

We now have over 2,000 customers, which is humbling. I started in open source many years ago, which is hard to believe. I spent a lot of time at Microsoft working on developer platforms. I saw an opportunity to take a step back and reimagine how we built software in a cloud-first era. That's what led to Pulumi. All roads led to infrastructure as code. We started with an infrastructure-as-code open-source project called Pulumi. It's up on GitHub, and it’s popular. We've built out over time a SaaS product that helps teams adopt infrastructure as code at scale.

We work with some of the largest enterprises in the world, so there are a lot of enterprise capabilities, but even for small teams, being able to collaborate on your cloud infrastructure is similar to how you work and collaborate on code with GitHub. Pulumi’s SaaS product helps you get things right, keep them right, and ship code to production. We've been at it for several years with over 2,000 customers. We have an open-source component that's core to everything that we do, and the commercial piece.

What was the itch that you scratched? Was there a moment where you were like, “This is clear in my mind that we have to go and do this because solution X or Y is painful and doesn't work for us?”

I mentioned all roads led back to infrastructure as code. We started by creating a new programming language. The problem we were trying to solve is that the cloud was difficult. I came from a developer background. My life was about getting developers to be as productive as possible every single day, with great tools like Visual Studio, programming frameworks like .NET, and languages like C#.

The cloud to me was an exciting shift. It's the shift to true distributed computing. The a-ha moment for me was the first time I used Docker. I felt like it brought the cloud to my desktop, where I could think in terms of building blocks to build distributed applications, but my customers, the developers of the world, didn't think of it that way. They’re like, “I let my infrastructure team take care of the messy details there.” I would talk to the infrastructure teams and they're like, “I'm toiling away with crummy tools. Nobody thought to give me the care and love that you all have given developers.”

Meanwhile, they were siloed and everybody was frustrated because they were filing tickets to work together. They weren't collaborating on code. Infrastructure as code is that missing piece that makes the cloud programmable and composable and brings the world of these amazing developer tools to the space of cloud infrastructure. Most tools were reinventing the wheel. You look at some of the DSLs out there, mountains of YAML, and piles of Bash. People were approaching it from a different perspective. Our unique angle was, “Let's learn from decades of building great developer tools and apply that to infrastructure as code.”

Let's learn from decades of building great developer tools and apply that to infrastructure.

For those readers who don’t know about your product, and forgive me if I've got this wrong, one of the core differences between Pulumi and the other folk out there at the time was that you can write these declarations in languages that don't make you want to pull your hair out. It’s actual languages like JavaScript or Python.

You asked about the a-ha moment. The first customer we worked with came to us and they were like, “We've got 25,000 lines of YAML. The DevOps guy who wrote it quit. We have no idea what in the heck it does. Everything is at a standstill.” We were able to rewrite that to 1,000 lines of JavaScript so that the entire developer team could understand the change. 25,000 to 1,000 is not a small difference.
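To make the 25,000-to-1,000 collapse concrete, here is a minimal sketch of why a general-purpose language shrinks configuration so dramatically: loops and functions replace copy-pasted stanzas. The resource shapes and names below are invented for illustration, not Pulumi's actual API.

```python
# Hypothetical sketch: in plain YAML, three nearly identical queue
# definitions must each be written out in full; in a real language, a
# helper function plus a loop collapse the repetition into a few lines.

def make_queue(name: str, visibility_timeout: int = 30) -> dict:
    """Build one queue definition; shared defaults live in one place."""
    return {
        "type": "aws:sqs:Queue",
        "name": name,
        "properties": {
            "visibilityTimeoutSeconds": visibility_timeout,
            "tags": {"team": "platform", "managed-by": "code"},
        },
    }

# One loop stands in for dozens of copy-pasted YAML stanzas.
queues = [make_queue(f"orders-{env}") for env in ("dev", "staging", "prod")]

for q in queues:
    print(q["name"])
```

The same pattern scales: changing a default in `make_queue` updates every environment at once, which is exactly the kind of change that is error-prone across thousands of lines of YAML.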

Even if they were the same number of lines, I'm sure the JavaScript implementation would've been more readable. YAML isn't properly typed. It's one of those things that I feel like I should sit down and learn properly one day, but I never get around to it. I'm like, “Is this even a format?” The spec is however many hundreds of pages long, which I always find astounding.

It’s like what Bullet Train was before we existed as a commercial organization. We were hacking around. We had a Docker box and a couple of other things. When people started using the thing in anger and putting production workloads through it, we realized we had to put our big boy trousers on. We thought we should use Terraform because that's what everyone else was using.

I can't remember who told me about it. Someone mentioned it, and this was several years ago. We were established, but not really. It was like, “Why would we use this over the industry standard for companies like ours at the time?” Within ten minutes, I was like, “We're going to use this because it's not YAML.” By nature of being able to use these expressive languages, almost all of your customers must collapse their lines of code. That ratio must be similar for your other customers to the one that you mentioned.

Languages were invented to tame complexity with things like abstraction, classes, functions, and reusability. You're not copying and pasting the same things over and over again. You mentioned YAML. YAML alone does have complicated features like anchors and things that I don't fully understand. I'll have to go read that 100-page spec.

To make matters worse, because these DSLs and YAML don't have conditionals, loops, or variables, every one of them has had to invent its own unique way of doing those things. If you look at CloudFormation or Helm, they have mini-DSLs embedded within them. It's not even learning the YAML. It's learning the ins and outs of everybody's specific DSL. With all that, it's still not as powerful as a programming language.

I worked in the programming language community for many years. I managed the C++, C#, F#, and Visual Basic teams at Microsoft. We used to have this saying that domain-specific languages are always destined to grow up into poorly designed general-purpose programming languages because you start out thinking the world is simple. As you encounter real-world use cases, you're forced to bolt on things that general-purpose languages have. You’re like, “I'll put a little quasi-for-loop thingy over here. I'll do some reusability over here.” You've seen that almost every example of an infrastructure DSL has gone down that path.

Pulumi: Domain-specific languages are always destined to grow up into poorly designed general-purpose programming languages.

It's YAML all the way down. It's ironic that the CEO of Pulumi feels like he doesn't understand YAML. It's amazing because it's dominant for describing things like that. The thing that I thought was amazing was when I did that ten-minute trial and saw if I could get something stood up. I thought, “I'll give it my AWS account key and see if it works.”

My background is VB 6, which I have fond memories of. Java and Python are in a non-asynchronous world. There's this cool asynchronous declarative engine that's running underneath that's doing all of the complicated bits: this needs a dependency on that, and it needs to wait for an ID back from AWS or GCP. Was that what you started writing first?

There are also libraries of things like Terraform that you can potentially attach to. I'm curious to know, what was the sequencing of decisions, which libraries you made use of, and how did you figure out whether, from a licensing point of view, that was something that was sustainable and long-term for the business? Was the engine the thing that you needed to build?

Yes and no. I left Microsoft in September 2016. We didn't know what we were building until March 2017. Even then, we didn't open-source it until 2018. It took us a while to get the basics right. The hardest part was that our CTO showed up and started hacking in my basement, which is where our first office was. He was working on a new programming language. We were looking at that, but we were like, “Does the world need another programming language?” If we could figure out how to marry this declarative infrastructure with the best programming languages in the industry, we wouldn't need a new language.

The hard part was figuring out, “Is it going to be multiple languages? Are we going to bet on one like JavaScript?” That was controversial within the company. I'm glad that we did multiple languages because it forced us to invest in a lot of different things that we can talk about. We built .NET and COM many years ago, which were multi-language technologies. Underpinning VB 6 was COM. That was the ability to have components written in C++ that you could consume from VB 6.

Some things were complicated back then, but some things were amazingly simple and powerful. I was writing VB 6, and consuming those objects in an ASP page blew people's minds back then because, before that, it was Perl.

Our biggest customers write their main components in Go and they have people consuming them in JavaScript and Python. The trick was figuring out the runtime boundary with the languages. We came up with the idea that the engine was going to be this shared piece and then we were going to host the language runtimes.

We did not start there. I started by forking the V8 engine, the Google Chrome JavaScript engine, which is a powerful piece of open source, so I was able to start there. I had worked on promises. What I worked on in the 2000s was, “Multicore is here. Everybody is freaking out. What are we going to do about it?” We invented async and all these mechanisms. Pulumi, at its core, has to track dependencies and orchestrate operations asynchronously. There are a lot of similarities between that and promises.

The initial approach to Pulumi was jamming promises into the heart of V8. You could perform operations, and it would track dependencies and perform these things asynchronously. If you've ever looked at the V8 code base, it is gnarly.
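The promise-based dependency tracking Joe describes can be sketched in a few lines. This is a toy model of the idea, not Pulumi's actual engine: each resource creation returns an awaitable "output", dependents simply await the outputs they need, and independent creations run concurrently. All names and the ID format are invented for illustration.

```python
import asyncio

# Toy sketch of promise-style orchestration: create() returns a task whose
# result is the resource's "cloud ID". Dependents await the IDs they need,
# so the dependency graph falls out of ordinary awaits.

async def create(name: str, *deps: asyncio.Task, delay: float = 0.01) -> str:
    dep_ids = [await d for d in deps]   # wait for dependencies' outputs
    await asyncio.sleep(delay)          # stand-in for a cloud API call
    suffix = "+".join(dep_ids) if dep_ids else "root"
    return f"{name}-id({suffix})"

async def main() -> dict:
    vpc = asyncio.create_task(create("vpc"))
    # subnet and security group both depend on the VPC and run concurrently
    subnet = asyncio.create_task(create("subnet", vpc))
    sg = asyncio.create_task(create("sg", vpc))
    # the instance waits on both of them
    instance = asyncio.create_task(create("instance", subnet, sg))
    return {"vpc": await vpc, "instance": await instance}

ids = asyncio.run(main())
print(ids["instance"])
```

The point of the sketch is that nothing ever declares "B comes after A" explicitly; the ordering and the parallelism both emerge from awaiting outputs, which is the property that made promises a natural fit for an infrastructure engine.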

Starting a business by forking V8 sounds like it sets the tone for where you are at.

I wish we were not capable of such things because we probably would've said, “This sounds crazy. Let's not do it.” It’s ten million lines of modern C++. That was a two-month spike to prototype the idea and see if it could work. We raised our seed round based on that initial spike. It was crazy. We knew it wasn't going to last, but it allowed us to prove the idea. The hello world of Pulumi is spinning up an EC2 instance with a security group. We were able to get that up and running.

We had the a-ha moment, which was, “Now that we've done this, we understand the boundary between what has to be in the engine that can be shared between all the languages and what can be hosted in a language runtime.” That's using the native runtime, not a fork. It's using the normal Node.js runtime or the normal Python runtime. At that point, it made Pulumi entirely language-agnostic. There are the cloud resource components that you mentioned. There's a separate journey for that one, but at least that's what the engine and runtime story was.

Pulumi: We understand the boundary between what has to be in the engine that can be shared between all the languages and what can be hosted in a language runtime.

Can you talk about the relationship with other projects? The most notable is Terraform. I've never quite understood it. When I started using Pulumi, I realized that you'd search Google for something and slowly piece together the API surface area that was in Pulumi’s runtime. If we hit a bug or couldn't figure something out, we'd always fall back to the Terraform documentation. What is the relationship with those library aspects of the Terraform platform?

The engine itself does not depend on anything to do with Terraform. You can think of the engine as extensible in two ways. One is the language runtimes; they can be extended. If you want to write a Haskell runtime or a Rust runtime, you could go do that. It wouldn't take a lot of effort. The second dimension is what we call resource providers, and those are cloud resources.

You spin up some security groups, open port 22, and create an EC2 instance, but you're describing to the Pulumi engine the infrastructure you want. It has to go issue create, read, update, and delete operations in some cloud, in this case AWS, but also others like Azure, Google Cloud, and Kubernetes, to carry out the operations that accomplish the desired state that you've declared. The question is, how do you do that? We started by implementing our own. For every single resource in AWS, we had to write the creates, reads, updates, and deletes, and we were happily chugging along. They're launching services faster than you can write them.

I was happily chugging along, writing as much code as I could. I felt like I was scaling a mountain. I'm like, “I can get to the top. It's there, I can see it.” Then Luke came into my office one day with a spreadsheet. He was like, “Joe, I did some calculations. At our current pace, it's going to take us over five years to finish the AWS provider.” I was like, “We need to take a different approach.”

We had experience in the past with other projects. If you think of ADO, RDO, and all these database adapters that Microsoft shipped, those were bridges to other drivers. We had the idea that these things are drivers. A provider is not a big piece of software. It's the schema for the resources and the create, read, update, and delete operations for those resources. It has nothing to do with Terraform. It just so happens somebody implemented these providers for consumption from Terraform.

We can create a bridge into the Pulumi schema and leverage any Terraform provider out there in the world. For other providers, like our Kubernetes provider, we didn't have to go implement that. That bridges over to the Kubernetes implementation itself. That was a place where constraints bred creativity.
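The "provider as driver" idea above reduces to a small interface: a schema plus CRUD operations. Here is a hedged sketch of that shape with a fake in-memory implementation standing in for a real cloud; the interface, class names, and ID format are invented for illustration, not Pulumi's or Terraform's actual plugin protocol.

```python
from abc import ABC, abstractmethod

# Sketch of the driver idea: any backend that exposes this CRUD shape can
# be plugged into an engine, whether it's hand-written, bridged from a
# Terraform provider, or generated from a cloud's API schema.

class ResourceProvider(ABC):
    @abstractmethod
    def create(self, type_: str, props: dict) -> str: ...
    @abstractmethod
    def read(self, id_: str) -> dict: ...
    @abstractmethod
    def update(self, id_: str, props: dict) -> None: ...
    @abstractmethod
    def delete(self, id_: str) -> None: ...

class InMemoryProvider(ResourceProvider):
    """Fake provider standing in for a real cloud, for the sketch."""
    def __init__(self):
        self._store: dict = {}
        self._next = 0

    def create(self, type_, props):
        self._next += 1
        id_ = f"{type_}::{self._next}"
        self._store[id_] = dict(props)
        return id_

    def read(self, id_):
        return dict(self._store[id_])

    def update(self, id_, props):
        self._store[id_].update(props)

    def delete(self, id_):
        del self._store[id_]

# The engine only cares about the CRUD shape, never who implemented it.
provider = InMemoryProvider()
bucket = provider.create("aws:s3:Bucket", {"versioned": False})
provider.update(bucket, {"versioned": True})
print(provider.read(bucket)["versioned"])
```

Because the engine only depends on this narrow surface, a bridge is just an adapter that implements the same four methods by delegating to an existing provider, which is what makes reusing the Terraform ecosystem tractable.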

Correct me if I'm wrong, but Amazon or AWS publishes its schemas. You could have done a similar thing directly with the AWS schema definitions. You would've had to do that for every resource provider out there, though.

It turns out Amazon didn't quite have what we needed to automate all of it. They published their schema, but their APIs were not generic CRUD APIs. You'd have to go map all the schema to particular APIs in AWS like CreateInstance. They're all peculiarly different. Amazon, several years ago, launched the Cloud Control API, which enabled us to do what you're talking about. We now offer that as an option in addition to the Terraform-backed provider.

For us, the simple answer is we've got native providers, Terraform-backed providers, and all sorts of different providers out there. In a sense, you shouldn't have to care as much. In some cases, the implementation details bleed through, and they used to bleed through a lot more, which is why you had to look at the Terraform documentation. We’re better these days about offering up great documentation of our own.

This was when we were getting up and running. I wanted to build an ECS REST API service, but I wanted to do it in Pulumi. We do that for our private cloud customers repeatedly. Do you always use or default to those Terraform mapping resources? Or do you sometimes pick and choose when a cloud builds a service like that, and you're like, “They've got this, which would be easier or more valuable to consume rather than going through an intermediary broker definition”?

It is pick and choose. It depends on the provider. We advise, “Here's the recommended path for AWS or Azure.” Our default Azure provider is not Terraform-based. Azure has great OpenAPI specs for the entirety of their cloud surface area. We were able to create a native provider for Azure that had 100% coverage of the Azure API, whereas the Terraform provider is manually implemented. You often find some service that Microsoft shipped a year ago that is not yet supported, or maybe if it is, it's missing some corner-case features. Our native providers are a better experience. We recommend those.

We have all the options. We have over 150 different providers. For some folks that are getting started, vendors like Pinecone will build a native provider from the start. It is a bit of a mix-and-match situation. Our ideal is that eventually these things should not be Terraform- or Pulumi-specific. We envision an open ecosystem of what a cloud provider is. Imagine if the schema for cloud resources and the CRUD operations were an open standard. That would be great for Pulumi, Terraform, infrastructure cost providers, vendors, and anybody who wants to do something with cloud resources. It just happens that Terraform set the standard early on.

At Pulumi, we envision an open ecosystem of what a cloud provider is.

I remember when I was implementing it and you had this project called Crosswalk. At the time, it looked like it was JavaScript only. Correct me if this is a gross overgeneralization, but Crosswalk is like an opinionated, reduced amount of code for when you want to do something like I wanted to do: give it a Docker image, push that, and have it show up in ECS with a scaling group. How much do people say, “Give me the base tools. I want to go and use those, and I'll shoot myself in the foot with them”? I was like, “This is great. We've got a PostgreSQL database and a REST API. We want to go nuts with that.” This must be 90% of what people are using ECS for.

Crosswalk now is multi-language. It started as JavaScript only, but we created effectively those COM-like things that I mentioned, the multi-language components. You can write in one language and consume anywhere. Crosswalk is now one of those things. You can consume it in any language. We do see that most of our customers using AWS will use some Crosswalk.

It's common for people to want these high-level abstractions, which cut down on boilerplate and avoid reinventing the wheel. How many times have you had to configure a VPC the same old way, going back to the documentation to remember, “Here's how I do this”? It's a waste of time. Why not have a few lines of code that do the right thing? A lot of people want to mix that with the low-level components where they have full control. That's part of the magic.
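The abstraction idea above can be sketched as a component that expands one short declaration into the pile of low-level resources you would otherwise hand-write each time. Everything here, the function name, the resource shapes, and the defaults, is invented to illustrate the Crosswalk-style pattern, not any real library's API.

```python
# Illustrative sketch: a high-level "component" bakes in sensible defaults
# and expands into the usual low-level boilerplate.

def vpc_component(name: str, az_count: int = 2) -> list:
    """Expand one call into typical VPC boilerplate resources."""
    resources = [{"type": "vpc", "name": name, "cidr": "10.0.0.0/16"}]
    for i in range(az_count):
        resources.append({"type": "subnet", "name": f"{name}-public-{i}"})
        resources.append({"type": "subnet", "name": f"{name}-private-{i}"})
    resources.append({"type": "internet-gateway", "name": f"{name}-igw"})
    return resources

# One line of "user code" stands in for several hand-written declarations.
resources = vpc_component("main")
print(len(resources))  # 1 VPC + 4 subnets + 1 gateway = 6
```

The mix-and-match point from the conversation maps onto this directly: a user who needs full control can still construct the low-level resource records by hand and combine them with what the component emits.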

Going forward, we're investing in a lot more of those abstractions because we see customers all the time doing the same thing over and over again and getting it wrong in the same ways. Where we want to get to is an even higher level than that. I've got a containerized microservice, a pub/sub topic, and a MySQL database. I described it to you in two sentences.

I knew exactly what you meant.

Now write that as infrastructure as code, because mine is a service in AWS. We launched an AI assistant where you can describe what I just said, and it will give you all the code. It works.

In terms of the commercialization path and trajectory of the company, HashiCorp has been tremendously successful. Mitchell was one of the first guests on the show. He was interesting. People in our industry instinctively understand that there's a lot of value-creation potential here. You mentioned Docker. They haven't been as successful commercially, and history is littered with companies like this that didn't quite get to the escape velocity that people assumed was implicit within their business, project, or community. How conscious were you of that? Were you watching where things were working and where they weren't?

We were intentional. Part of why it took a while to launch was this point. My co-founder, Eric, would remind me every time I started to get impatient and wanted to hit the button to make the repo public and launch. He’d say, “Let's wait until we figure out how we're going to make money first.” I talked to a lot of the folks you mentioned, the successful ones and the ones who struggled. I tried to learn as much as we could about what worked and what didn't work in terms of commercializing open source.

The key insight I had was you need to offer something of value that people want to pay for. You can't trick people into paying for something free. Open source, by its nature, is meant to be free and you want it to be free because otherwise, why are you doing open source? The trick is finding the balance between the two.

You need to offer something of value that people want to pay for. You can't trick people into paying for something that's free, and open source, by its very nature, is meant to be free.

The way we approached it is slightly different than most other people. We were going to think about open source as an independent piece of software rather than the thing we were going to sell. We were not going to do open core or hold things back. We were going to say, “This thing is fully open source.” What we were going to offer is like Git versus GitHub. GitHub itself is not open source, but it works great with Git. It's one of the easiest ways to go and use Git in a team.

You can go host your own repos, figure out the security, and back them up. How are you going to do collaboration? What is the equivalent of pull requests in that world? You can go that path if you want fully open source, and Git was designed to support those scenarios, but GitHub makes it easy, with collaboration, reliability, and security. We took a similar approach.

Pulumi Cloud is a complement to the open source that we offer. If you want to use Pulumi fully open source and offline, go for it. It's designed to be used in that way, but it's much easier, especially at scale, to use the Pulumi Cloud. When we launched, the Pulumi Cloud was barebones. It was basically a state store at the time. It's grown up over time with a lot of great capabilities.

It's interesting because, for us, that state for our production infrastructure is important. We were like, “That's the last thing in the world I want to be having to look after myself.” In terms of the licensing for the project, it's generous. What people would argue is the valuable IP of your business is completely free and open. How certain were you that that was the right call?

I had the benefit of running the open-source internal strategy and policy at Microsoft, of all places, during the time when Satya was taking over and pushing the company towards more open source. I was responsible for putting together the team that open-sourced .NET and C# before I left. As part of that, I worked closely with Microsoft Legal to understand the ins and outs of MIT versus Apache 2 versus copyleft. I had a lot of experience picking and choosing licenses. We picked Apache 2.

We didn't want to hold anything back. Our genuine purpose was to make this thing as open as possible and bet on our belief that we could also offer a commercial piece of software that people would want to purchase because it genuinely adds value beyond the open source. That's different than a lot of the folks taking the new business source licensing approaches. We believe strongly in it. It was down to MIT and Apache 2. The cloud-native ecosystem, with Kubernetes and all these tools, bet on Apache 2. I'm a big Apache Foundation fan.

What broke the tie was that this is the cloud-native license that everybody's rallying around. These enterprises have long conversations and internal policies about what licenses they're going to allow people to contribute to, so if you pick the same license, you're already approved. We figured that aligning with Kubernetes was going to be a safe path for us.

I'm on the governance board for the OpenFeature project. Political is the wrong word, but the successful and growing projects are liberally licensed. It helps because, with OpenFeature, there are employees from large companies like Google and Spotify, and those companies are graciously donating their time and things of that nature.

In the context of the BSL changes that HashiCorp announced, there's an idea that I feel is missing. It's a shame that this concept of a rug pull is now associated with crypto, where people lose a ton of money and someone disappears with hundreds of millions of dollars worth of Ethereum tokens. I wonder if you've given consideration to giving your customers and users some guarantee, structuring the organization in such a way that a rug pull isn't on the cards, because it is such a sensitive subject. I wonder whether your alignment of goals changes.

That's unarguable, and that's had something to do with it. We've seen this happen over and over again with large, successful projects. I don't want to beat down on it. People are not making these decisions because they're sitting in an ivory tower counting money. I wonder if there's a solution that's yet to be found for entities to somehow provide some level of guarantees.

We talked about this before we started recording: people will fork those projects, and those organizations may live or die depending on the fork. I wonder how you guys have thought about that, because you've already set out your stall in terms of all that code being out there. I can go and fork it right now and that would be that.

The fork is always an unfortunate outcome. In a sense, it's great. It's the beauty of open source, and yet it indicates a fundamental disagreement between the project's current stewards and the community. When it comes to that, things have gotten to a bad place. A lot of times, these forks succeed and some fail, but they always fragment the community. We're seeing that in practice with Terraform and OpenTofu. There's OpenBao, which is a version of Vault that is open.

The good news is a few things. One, we are genuinely aligned with the success of the community and the success of the open-source project because the business model we've chosen is different. As for the situation HashiCorp got into, those guys are brilliant. They've built some of the industry's best and most amazing pieces of software. My perception is that they didn't think deeply about the business when they were getting started. They wanted to create the most amazing open-source tech the world has ever seen. They did, but then they realized, “We need to make money. How do we do that?” We were thoughtful about that from the founding.

Pulumi: We are genuinely aligned with the success of the community and the success of the open source project.

The interests of the community are aligned with the interests of Pulumi in a fundamental way. Is there an open Pulumi foundation, something that gets started eventually? Legally, that would be straightforward to do. The other thing that hopefully gives people some assurance is that I was strongly opinionated here. Our lawyers were like, “You must have a contributor license agreement or DCO.” A CLA assigns IP rights to the corporation, which is what lets them do the rug pull. We do not have a CLA. We do not use a DCO. We are Apache 2 without any footnotes. At some point, HashiCorp did add CLAs to all of their projects. That was the warning sign that something like this was around the corner.

It's interesting because people don't often talk explicitly about that, and you've baked that in from the start, but it's difficult. For some projects and businesses, it might be easier to come up with a solution like that than for others, because it depends on what they're doing and where the perception of value lies. It's a shame that sometimes it's harder. Docker is a great example. How does something that completely revolutionized the industry effectively make money? We are a customer of theirs. We can set up and give access to download our enterprise images to our enterprise customers. That's the luck of where the cards fall depending on your idea.

Docker is great. It took them a while. They sold off part of the business and figured out a strong business model. They went from close to zero to over $100 million in several years after they figured it out. They did take a heavy-handed approach. It was a big gamble. They took the approach of, “If you're using Docker Desktop and your company makes over $10 million, you have to pay us something.”

There's still Moby out there. I have not read every little detail of their license, but I'm sure you can still build derivative works from Moby without paying the money, even if you're making over $10 million a year. They took something convenient with huge distribution that was the industry standard, and they did something that was not unreasonable: if you're making more than $10 million and you're finding all this value in their product, pay us money. They walked a perfectly fine line. That's a novel approach. Not many people have taken it.

What's next for Pulumi? You mentioned 150 providers, which is a much larger number than when I first got the binary running on my machine. What are you guys working on now for 2024?

We launched some exciting and early new products. One is Pulumi Insights, which is search, analytics, and AI over your infrastructure. The idea is that you've got all this infrastructure and you don't know where everything is or how it's working, so you get the ability to interact with it through a natural language interface, not only to ask questions about it, but also eventually, in the future, to be able to manipulate infrastructure. Do I have any untagged S3 buckets? Yes, you've got all these. Tag them for me. That's an exciting thing that we've gotten started on.

I mentioned the AI assistant. It is good. I often find the cloud has thousands of building-block services, and it's a matter of how I take those two-sentence descriptions and encode them in the building blocks. It helps you with that. We also launched a secrets management product that helps you do configuration and secrets at scale. Most people will say, “Is it like Vault?” It's not. It solves a subset of the problems that Vault does, but it solves the problem that we keep seeing: I've got all these secrets and configuration variables, I've got hundreds of environments, I’m copying and pasting everywhere, and I want to do things securely. Help me tame the sprawl. It helps with that.

The third major area is that we're still programming at a low level. Where is the Node.js of modern cloud programming? It's not that simple yet. We're still programming effectively at the level of assembly language. It's great that we have these amazing programming languages to do it in, but the concept count is high. It's complicated. Making the cloud a lot simpler to consume and use as a developer and an infrastructure practitioner is the third key area for us.

That sounds like the hardest one. I had to look into Pulumi recently. We wanted to add permissions to a new DynamoDB table, and we got an error out of it. It’s the worst feeling in the world when the Pulumi action goes red. It's virtually the worst feeling as an engineer. It was some subnet error. I was like, “Why am I doing this?” It's not your fault. Someone had created a thing and it created another subnet, and our code was looking at the subnets. I'm like, “What am I talking about? I don't care about subnets.”

An analogy I draw sometimes is that the cloud is the new operating system. We've moved beyond the single-machine, single-node operating system. You think of .NET, Java, Ruby, and Python. What did they do? They took operating systems and made them usable. All of those security models, file systems, and networks got a highly programmable, usable surface area. That is yet to be done for the cloud. We think we can do that.

That's an audacious goal. I commend you on that. That's not something that I'd want to be staring down the barrel of. We'll get to the VB 6 of the cloud. All these youngsters are going to be reading this and going, “That sounds terrible.” I swear to God, it was easy. It’s the most efficient and constructive thing that I ever had any experience dealing with. Look at Webpack configurations now, mate.

VB 6 is the gateway that got me into the Microsoft ecosystem. I was a Linux guy. I had a SPARCstation on my desk. I did VB 6 for a client of mine. I was consulting, and I was like, “This is magical.”

When I was at university, it was 1-bit SunOS terminals and early Linux. I had a friend who had Solaris on a PC. Then I started working commercially, and VB 6 was an incredibly productive platform. The design style of those applications is nostalgic. Joe, thank you so much for your time. It’s been fascinating. Thank you for the project and your insight. It’s been interesting.

Thanks, Ben. It was an awesome conversation. I had a ton of fun. It feels like we could keep going forever.

Thanks again and good luck for 2024.

Thanks, Ben.
