Jenkins/WireMock

Interview with Oleg Nenashev: Governance Board/Core Maintainer, Jenkins
By Ben Rometsch on January 9, 2024

Jenkins, an open-source stalwart and a founding project of the Continuous Delivery Foundation, underwent a transformative journey towards independence within the foundation. In this conversation with Oleg Nenashev, Governance Board member and Core Maintainer, we navigate the significance of feature flags in continuous integration and delivery, culminating in excitement about potential standardization through the OpenFeature project. Shifting gears, the conversation delves into WireMock, a widely used API mocking tool, exploring its capabilities, commercial extensions, and upcoming improvements in version 3. Oleg emphasizes collaboration and alignment across diverse languages and repositories. The conversation concludes by spotlighting Penpot, an open-source design platform, as a noteworthy project. Join in for a comprehensive exploration of Jenkins, feature flags, WireMock, and the broader open-source landscape.

---

In this episode, I have Oleg Nenashev with me, who has a glittering open source career, working across a bunch of super interesting open source projects. He’s got a really interesting project on the go at the moment in WireMock. Oleg, welcome to the show.

Thanks a lot for inviting me. I am happy to chat about anything open source related community-wise or on the technical side. I am looking forward to the discussion.

As an introduction and an idea about your background, what was your very first experience with open source software?

My first experience was with open source hardware because I started my career as a hardware and embedded engineer. I spent quite some time there. The first projects I started working on were OpenRISC and similar ones. After that, I started using developer tools, including compilers, etc. Finally, as I gradually moved towards the Linux ecosystem, I started using more open source.

What was the hardware that you were working on?

Processor cores and electronic design automation. It was quite a lot of stuff.

That's super interesting. What was it that made you make the switch to a software career?

I switched not exactly to a software career. I switched to a developer tooling career. When you work on hardware and software, the approaches are the same. You still need to build the software. You need to test the software. You need continuous integration tools and a lot of scripting. For me, quite early in my career, I realized that there is a lot of need for various kinds of automation and started doing more on the developer tool side.

I got introduced to Hudson in 2009. It was quite early in the cycle. We started using it for hardware projects and then software ones because we had a lot of embedded software. As commonly happens in the industry, once you fix the pipeline, you own the pipeline. "Pipeline" didn't even exist as a term yet, but the principle was the same. I ended up maintaining my Hudson instances.

When I was working for Intel Labs and for Synopsys, I was maintaining bigger Jenkins infrastructures. At Synopsys, I was leading a team doing automation infrastructure for our department. This is the time when I got to contribute to open source, too. Before that, I was working for hardware companies, mostly international ones, where it was almost impossible to do anything in open source. With Synopsys, I got permission to do that, so I started actively contributing to Jenkins.

At some point, we were running a really big instance with something like 200 or 300 engineers using it with dozens of products running on the instance. Finally, as it happens with open source, you need to submit some patches here and there. In our case, it was mostly because of security requirements. Also, I needed some features in the Jenkins tool. I started contributing. Eventually, people gave me permission because I was too annoying. I had too many open pull requests.

Finally, Kohsuke, the founder of Jenkins, said, “We should probably just give you the right permissions.” It was 2014. This is when I formally became a Jenkins maintainer. I've been participating in the project ever since. I really like Jenkins as a community. I like Jenkins as an automation framework, though perhaps not so much as a CI and CD tool. Still, if you do it right, Jenkins is quite good for that purpose.

Jenkins has had some interesting moments in its history. Some people might not know that it started life as a project called Hudson. You don't hear it called that very often anymore. It was one of the more famous forks of a reasonably large project. Were you involved in the project when that went through?

No. I was using it quite actively, but I wasn't in the community. When you work for hardware companies, it's almost impossible to get permission to participate in open source. I was following all this renaming, though. For the record, Jenkins wasn't a fork. The community renamed Hudson to Jenkins because Oracle wanted to keep Hudson as a trademark. Formally, infrastructure-wise and governance-wise, today's Hudson is the fork and Jenkins is the renamed original. This is something many people do not know, but it’s the actual state of affairs between these projects.

How come Oracle had the rights to that in the first place?

Hudson started at Sun Microsystems. Kohsuke was an engineer there. He needed something to automate the builds, including distributed ones. This is how Hudson was born. Management at Sun Microsystems saw value in Hudson, so they started doing an enterprise offering around it. Around this time, Sun Microsystems was acquired by Oracle. Oracle inherited Hudson not just as an open source project but also as an aspiring product.

All the classic Oracle politics surfaced. There were a lot of restrictions. A lot of people who had come from Sun left Oracle. The same happened with Hudson. Kohsuke left, and the other people who were involved with Hudson at Oracle left, except one engineer who became a maintainer. The project continued outside of Oracle, but Oracle held the name at that point. Finally, there was a need to either rename the project in the community or accept that Oracle owned the trademark, with all its potential limitations. The community decided to rename.

I remember, quite a long time ago, back when CI and CD didn't really exist as concepts, I was trying to build Java projects. Hudson was the first tool that I used, and I managed to get it working within a couple of hours. At the time, it didn't have a declarative YAML file or anything like that. It was a lot of clicking around.

Hudson and Jenkins evolved quite a lot along with the ecosystem. Jenkins is a completely different Jenkins compared to what we had many years ago. In 2024, Jenkins will be celebrating twenty years since its inception. The project is quite old, but the applications of the project and the way of using it have completely changed. Jenkins is now typically fully configured as code, with a lot of pipeline as code, a lot of developer-facing capabilities, and a strong emphasis on containerization.

As in GitHub Actions and Tekton, you rely on container steps. This is the approach we recommend in Jenkins. If you want, you can still use it in the old way; that is still supported at the moment. But if I were to create a new automation infrastructure with Jenkins, the approach would be completely different from what it was many years ago.

The other thing that I recall from a very early stage in the life of the project was that it had a healthy plugin system to the point where there were thousands of plugins. Do you think there were any design decisions or non-technical decisions that the project made that made it so popular in that regard?

Yes. There are two parts. Firstly, plugin development for Jenkins has been made super easy from the very beginning. There is no complex framework, no mandatory classes, and not many other obstacles for users who want to start using Jenkins and developing for it. For example, I started contributing to Jenkins with zero experience in production Java code. I had done some projects with Java Mobile Edition and J#, but ultimately, I came to Jenkins and started contributing.

From the non-technical side, Jenkins as a community has always been quite open and welcoming, so there was little to no restriction on hosting a plugin. That's still the case. As a result, there are not many restrictions on users stepping up as co-maintainers of plugins, and maintainers remain in full control of their components.

Jenkins as a community has always been open and welcoming.

In Jenkins, while we have a governance model and a core maintenance team, when it comes to the plugin ecosystem, there is almost full freedom for the maintainers. It provides a lot of flexibility. Also, it allows everyone to start and contribute quite quickly. The downside of that is the various kinds of integration and compatibility problems Jenkins is famous for. For that, we created a lot of additional guardrails. Later, we created new developer tools specifically for plugin maintainers. That helped them to build more stable plugins. The ecosystem is perceived to be much more stable than it was several years ago.

How does the project deal with making and breaking changes to that API interface, or does that not happen anymore?

That happens from time to time. First of all, we have to do security fixes. Sometimes, security fixes have to be breaking changes, and they have to be delivered quite quickly. Also, there is a lot happening in Jenkins in terms of evolution. When I started with the project, a lot of things were embedded in the Jenkins core code base, including many different modules: for example, self-management as a Windows service, self-management as a number of Linux daemons, and a lot of things specifically for the Solaris platforms, because the project started at Sun Microsystems. There is a plugin, for example, for OpenIndiana that took over all of that code from Solaris.

Initially, we started with a big monolithic code base. We had to break down the components and split them out, and we couldn't do that without breaking changes. Some breaking changes were nominal; they could be mitigated by documentation. Some breaking changes needed actual changes in the plugins or in user pipelines, and Jenkins has ways of approaching that.

Jenkins doesn't follow semantic versioning. Instead, we have feature deprecation policies. We announce feature deprecations. One of the most common scenarios is Java version support and support for various operating systems, because we still depend on some native components in the code base. There, we say, “From now on, we don't support this version.” For example, when we decided we no longer wanted to support Java 8, we announced it. We provide some developer tools. We provide some analytics for maintainers so that they know what happens in real instances. We still set the expectation that eventually, you migrate.

How it works is we have the weekly release line, which is not strictly weekly anymore. It's released on demand, so we can trigger it as often as we need, and we can do breaking changes quite quickly with little notice. Then there is the LTS baseline, which is maintained for several months. This is the stable line where we have a lot of announcements, preparation for releases, migration guidelines, etc. We had to find a compromise between moving fast and providing compatibility. For a twenty-year-old project, it's essential to provide compatibility and retain all these features. We have quite a good compromise at the moment.

You are also involved in Jenkins with regard to the Continuous Delivery Foundation as well. Can you describe a little bit for people like myself who don't know what that organization encompasses and what your role in it is?

Yes. Jenkins is one of the founding projects of the Continuous Delivery Foundation. We started talking about having a foundation for Jenkins maybe in 2015, or maybe a bit before. Historically, Jenkins has always been independent as a project. At the same time, there were quite a lot of vendors, for example CloudBees, that had a lot of influence over Jenkins. At some point, it became quite unhealthy for both the company and the project.

Jenkins Open Source: Jenkins is one of the founding projects of the Continuous Delivery Foundation.

The idea was to emphasize the independence of Jenkins by making it part of a foundation, which would be vendor-neutral and would facilitate collaboration with other vendors, etc. There are a lot of good things that can happen for an open source project that a corporate product manager won't always be happy about. Some separation was needed.

Later, we also had discussions with the Tekton project, which was at Google at the time. After that, we started a discussion with the Linux Foundation. The decision was that instead of a Jenkins Foundation, there would be a foundation uniting a number of projects that wanted the same thing, with cross-pollination and collaboration between the projects, as happens with other Linux Foundation foundations.

This is how the CDF was born. There were initially four projects; now there are around ten. The objective of the foundation is to facilitate collaboration in the area of software delivery in all its aspects, including continuous integration and continuous delivery, but also the software supply chain and many other areas. At the moment, the foundation includes 40 or 50 companies and 10 projects. There are a number of special interest groups, and there is ongoing development of standards. For example, there is a standard for CDEvents, which is an add-on on top of CloudEvents specifically for continuous delivery events and continuous integration events, too.

My role in the foundation was initially as a representative of the Jenkins project. Later, I was elected to the technical oversight committee. From 2021, I believe, I was on the technical oversight committee. I was then elected to chair this committee and also to be a governance board member. I was in that role for a year and a half. I stepped down in December 2023 when I no longer had the capacity to deliver on the responsibilities of the role.

I still remain a TOC member, but most likely, this is my last month, because I believe there should be rotation for all public roles. In Jenkins, we have quite a lot of people representing the project on different levels. This year, as I suggested, we nominated another contributor and Jenkins governance board member, Mark Waite, to represent the Jenkins project. For me, naturally, it means that I will step down, but I will do my best to support the foundation. I still remain a CDF ambassador, a person who represents the foundation and speaks about it and its projects, but I won't be on the TOC anymore.

We met through OpenFeature, and I wanted to talk a little bit about it. One of the things that we at Flagsmith spend a lot of time thinking about, and one that I feel is a bit of an unsolved problem in our corner of the world, is how feature flags fit into the CI/CD process. I was curious about Jenkins, for example: have there ever been attempts, popular plugins, or anything that tries to address that problem? In many ways, feature flagging is counterintuitive to this whole notion of configuration as code, GitOps, and all that sort of stuff. What has your involvement with feature flags in this space been historically?

I was doing feature flags before it was mainstream. Even in hardware and FPGAs, it's normal to have quite a lot of feature flags.

That’s really interesting. I didn't know that.

Sometimes, you need to do some experimental things. Sometimes, you have configurations. If you have dev kits or FPGA boards, there are a lot of jumpers. These are two pieces of conductive metal that you can connect with each other, and then something changes. For example, the clock frequency of your processor can change. Maybe you can enable another feature. You can enable a connector, etc.

In hardware, feature flags have always been there. When we talk about modern feature flags, it's completely different. Personally, I see feature flags as a natural part of software delivery, GitOps, etc. For me, a feature flag is part of the value stream for developers. It’s very important if you want to speed up delivery and get feedback.

A lot of the things we talk about in DevOps, like A/B testing, canary deployments, etc., are, in a nutshell, feature flags. Sometimes they're not boolean, but ultimately, we want to provide some configurable features. We want to have observability of what people use and how they use it. We also want to give them the ability to, for example, try out beta features and preview features. All of that is managed as feature flags.

At the moment, you cannot do anything without it. When the proposal about OpenFeature started floating around, I was super excited. I was with Dynatrace at the time, and I was really supportive of the idea. I joined Mike Beemer, Alois, and a few other people to try to kick off this effort. So far, so good, in my opinion.

In terms of using it with tools like Jenkins, have there ever been efforts or projects within Jenkins to try and integrate those in? It seems like an obvious thing to do. We've seen GitLab have feature flags. I know you have elements of that. Firebase had them a long time ago. How has the Jenkins project approached that?

Jenkins has feature flags of sorts. You can go to the Jenkins properties page, etc. In fact, in our ecosystem, we have several feature flags. Usually, they're applied as system properties. We have a mini framework that allows us to map them from configuration files, environment variables, and sometimes from the CLI, but ultimately, they're system properties defined in Java. Some system properties are runtime, so you can change them without reloading your Jenkins instance. Some of them are static, so you have to reload for the change to take effect. Since Jenkins is a distributed system with agents and a lot of RPC going on, the reality of these feature flags is more complicated.
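
For illustration, here is a minimal sketch of what a system-property flag of the kind Oleg describes typically looks like in Java. The class and property names are hypothetical, not actual Jenkins flags.

```java
// Minimal sketch of system-property "feature flags" in the style described above.
// The property names below are hypothetical, not real Jenkins properties.
public class ExampleFeatureFlags {

    // Static flag: read once when the class is loaded, so changing it
    // requires restarting (or reloading) the instance.
    private static final boolean NEW_UI_ENABLED =
            Boolean.parseBoolean(System.getProperty("example.feature.newUi", "true"));

    // Runtime flag: read on every call, so it can be flipped on a live
    // instance (e.g. from a script console) without a restart.
    public static boolean isVerboseLoggingEnabled() {
        return Boolean.getBoolean("example.feature.verboseLogging");
    }

    public static boolean isNewUiEnabled() {
        return NEW_UI_ENABLED;
    }
}
```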

We have discussed using an external framework for feature flags in Jenkins quite a lot. For us, there are still some particular use cases. For example, some Jenkins extension points operate at the class-loading level. I am a maintainer of jenkinsfile-runner, which is a standalone Jenkins engine. It uses feature flags to instruct Jenkins that some particular bits of the core shouldn't even be loaded because they don't make sense in the environment of jenkinsfile-runner, for example, the whole front end. All of those are feature flags. This is what we would like to enhance.

The problem is that there is no exact solution for what we could use instead of our own implementation. I really like OpenFeature as a project, but, in my opinion, one thing that is ultimately missing there is a unified interface that one could use from a project standpoint. OpenFeature is organized so that there are multiple SDKs. If you want to connect your product to a particular OpenFeature provider, you need to include that provider's SDK in your project. It makes total sense for the vast majority of products, but it doesn't make much sense for open source projects because it immediately makes the project vendor-dependent.

Jenkins Open Source: One thing that is ultimately missing is a unified interface that one could use from a project standpoint.

For example, take Kubernetes; Kubernetes also has feature flags. What do they do if they want to integrate OpenFeature? They would have to take the Flagsmith, CloudBees, Dynatrace, GO Feature Flag, etc. providers and bundle them together. Instead of that, how I saw OpenFeature in the beginning and how I still see it is that there should be a generic client that could connect to a feature flag provider, which is externalized. That SDK, on the implementation side, would be open source and fully vendor-independent.

Once that happens, for me, it would be much more convenient to talk about including it in Jenkins and other open source projects. I understand the value of SDKs, but for open source projects, it would be table stakes to have a vendor-independent client library they could include in their code base and distribute. Then all the users of those open source projects decide which feature flag provider they take.
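
For a concrete sense of the model Oleg describes, here is a rough sketch using the OpenFeature Java SDK, assuming its current API names. The client name and flag key are hypothetical, and the provider would be wired in by whoever deploys the application rather than bundled with the project.

```java
import dev.openfeature.sdk.Client;
import dev.openfeature.sdk.OpenFeatureAPI;

public class FlagLookupExample {
    public static void main(String[] args) {
        OpenFeatureAPI api = OpenFeatureAPI.getInstance();

        // In the vendor-neutral model described above, the deployer would call
        // api.setProvider(...) with whichever provider they chose, so the
        // project itself ships no vendor-specific SDK. With no provider set,
        // evaluations simply fall back to the defaults passed below.
        Client client = api.getClient("example-app"); // hypothetical client name

        // Hypothetical flag key and default value, for illustration only.
        boolean newUiEnabled = client.getBooleanValue("example.new-ui", false);
        System.out.println("New UI enabled: " + newUiEnabled);
    }
}
```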

That's a super interesting answer. You’ve been thinking about this a lot.

Maybe I'll even hack something; I don't know. I plan to return to OpenFeature. For some time, I was away, but that was beyond my control. I'm about to return to the project. I plan to do a demo. There will be a cloud-native meetup in Basel, where I will be presenting mocking of OpenFeature with WireMock and Testcontainers for test purposes. That is when I started implementing and realized that I still cannot use it as is for many cases, so I will have to come up with something different.

You mentioned WireMock there. Let's talk about that. That's the project you're working on at the moment, right?

Yes. In April 2023, I joined WireMock. WireMock is a really small startup; I am employee number seven. If you have ever used Java, you might have heard about WireMock because it's one of the most popular API mocking tools. I have used it since 2014 or so. The tool is almost ten years old. It's very popular. There are more than five million downloads per month from Docker and Maven Central. Nobody can really say what downloading from Maven Central means in terms of all the various Maven builds, etc., but that's the number I can give you. For me, I'm helping to build an ecosystem and a community. Are you familiar with Testcontainers?

If you have ever used Java, you might have heard about WireMock because it's one of the most popular API mocking tools.

Yes.

Testcontainers started as a framework for Java. Then, a lot of people found the concept really interesting. It’s not rocket science as a concept, because a lot of people use single-shot containers for testing, but they made it super easy to use. Other people started using it and creating implementations for other languages. After that, Testcontainers became super popular in the integration testing space.

For me, the story of Testcontainers is very similar to Docker: you provide a really good developer experience and then people start following you. They expand the ecosystem dramatically. AtomicJar provides software on top of Testcontainers. From an open source standpoint, there are multiple steps: first a successful Java project, then a successful open source ecosystem, and then vast adoption.

This is something I'm trying to do with the WireMock tool. In fact, there are implementations for all kinds of languages. There are more than 50 various WireMock implementations, including standalone servers and separate language bindings. I want to somehow help consolidate these projects into a single open source ecosystem that would be available for all platforms and would help all the users. It sounds like a really good challenge for me. When the founders of the company reached out and we first talked, I was like, “I use this tool. I like it, so why not?”

As an example for myself and others who are reading, let's say I was using Stripe as my payment processor and I wanted to test or include that part of the application in some tests. How would WireMock help me with that?

For Stripe, it's super easy because Stripe has OpenAPI specifications. We have launched the API templates library for WireMock. It's an open repository of different templates, something like 2,000 of them, including Stripe. What happens is you download a JSON file with a WireMock configuration and then, in a second, you get a running service that emulates the Stripe APIs.
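
For illustration, here is a minimal sketch of standing up such a mock with WireMock's Java DSL. The port, path, and response body are simplified placeholders rather than the actual library template.

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

public class StripeMockExample {
    public static void main(String[] args) {
        // Start a local WireMock server on an arbitrary port.
        WireMockServer server = new WireMockServer(options().port(8089));
        server.start();

        // Stub a Stripe-like endpoint; path and payload are simplified for illustration.
        server.stubFor(post(urlPathEqualTo("/v1/charges"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\": \"ch_test_123\", \"status\": \"succeeded\"}")));

        // The client under test would then be pointed at http://localhost:8089
        // instead of api.stripe.com.
    }
}
```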

In WireMock, there are a lot of things you can emulate. It’s not just if-then-else handling. You can filter the requests, for example, by endpoint and request type, GET or POST, and then WireMock can respond. There are a lot more features, including stateful behavior, where it can generate response sequences. It can inject faults if needed, for example, response delays and failures. Via its admin API, it can inject arbitrary responses. Moreover, it can even proxy requests. It can sit in front of your service, injecting some additional use cases, reverse engineering your protocol, or validating your protocol for OpenAPI specification compliance, which is super useful for integration tests, as I discovered.
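
A short sketch of those behaviors with the same Java DSL, assuming the running server from the previous example; the endpoints and payloads are illustrative only.

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.http.Fault;

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class FaultInjectionExample {
    // Continues the sketch above: endpoints and payloads are illustrative only.
    static void configure(WireMockServer server) {
        // Fixed delay to simulate a slow upstream.
        server.stubFor(get(urlPathEqualTo("/v1/balance"))
                .willReturn(okJson("{\"available\": []}").withFixedDelay(2000)));

        // Connection fault to exercise the client's error handling.
        server.stubFor(get(urlPathEqualTo("/v1/customers"))
                .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));

        // Low-priority catch-all that proxies everything else to the real API.
        server.stubFor(any(anyUrl()).atPriority(10)
                .willReturn(aResponse().proxiedFrom("https://api.stripe.com")));
    }
}
```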

There are a lot of things that WireMock can do. All of that you get either by downloading a specification for an existing service or by building your own specification that follows the schema. If you use a standard testing framework, let's say JUnit 5 for Java, or even Robot Framework, Kotest for Kotlin, Golang test frameworks, etc., there are bindings that help you get started.

There are rules that set up the WireMock server at the beginning of the test. You mock it, define all the responses, and then you run your test transparently. To some extent, it's like Testcontainers; the user experience is super similar. The difference is that you do not need an implementation. With Testcontainers, you can do the same, but only if you have something implemented.
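
For example, with the JUnit 5 binding a test might look roughly like this. The annotation and class names are taken from recent WireMock releases and may differ slightly by version; the stubbed endpoint is hypothetical.

```java
import com.github.tomakehurst.wiremock.junit5.WireMockRuntimeInfo;
import com.github.tomakehurst.wiremock.junit5.WireMockTest;
import org.junit.jupiter.api.Test;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static org.junit.jupiter.api.Assertions.assertTrue;

// The extension starts a WireMock server before the test and stops it afterwards.
@WireMockTest
class PaymentClientTest {

    @Test
    void createsCharge(WireMockRuntimeInfo wm) {
        // Stubs registered here apply to the server started by the extension.
        stubFor(post(urlPathEqualTo("/v1/charges"))
                .willReturn(okJson("{\"status\": \"succeeded\"}")));

        // Point the (hypothetical) client under test at the mock's base URL.
        String baseUrl = wm.getHttpBaseUrl();
        assertTrue(baseUrl.startsWith("http://localhost:"));
    }
}
```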

Since WireMock operates fully on top of a specification, it is much less powerful in terms of implementing business logic, because it would take quite a lot of time to implement business logic even with scripting and stateful behavior. I would never do that. But for various kinds of integration testing and for initial development, it is super handy.

I'm curious to know what percentage of SDKs exist where you're not able to override the API domains that are being called. You need to do that before you can start using the platform, is that right?

Not necessarily. With WireMock, you don't need to write anything. Once you have specifications, you are good to go. For OpenAPI specifications, we, as WireMock Inc., provide a set of templates for common use cases. What happens is we take OpenAPI specifications, generate mocking definitions, and then apply some AI magic, etc. to make them even more reasonable, including test data generation. This is the specification you get. You can amend it for your use cases. There is no real service involved when you use WireMock.

If I want to run a test using Stripe, by default, the Stripe SDK is going to go out to the production Stripe endpoints to make its calls, right?

Yes. If you use the external Stripe, for instance, your test is going to be quite flaky. For example, Stripe may change the APIs. That's good in a way, because you will discover an issue in your integration test, but there might also be an outage or a maintenance window. Sometimes, they may ban you for submitting too many API requests and exceeding the limit.

What WireMock can do in this case, instead of relying on the external service, is give you a local mock that follows the standard of Stripe communications and has a set of behavior scenarios. For example, if you want to transfer a payment, you send a request and get a response with some metadata. You can define what the response would be quite intelligently. You can also do sequences, etc. WireMock mocks the interface. It's not the real service, but it's something that your test can talk to. If you need things like SSL, for example, and Stripe is available only through SSL at the moment, that's also possible with WireMock.
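
As a concrete example, the stripe-java SDK exposes a base-URL override (commonly used with mock backends), so a test setup could point it at the local mock; the port and key below are placeholders, and other SDKs usually offer a similar knob.

```java
import com.stripe.Stripe;

public class StripeTestSetup {
    public static void main(String[] args) {
        // Point the Stripe SDK at the local WireMock instance instead of api.stripe.com.
        // The API key is a dummy placeholder; the port matches the earlier sketch.
        Stripe.apiKey = "sk_test_dummy";
        Stripe.overrideApiBase("http://localhost:8089");
        // If the SDK insists on HTTPS, WireMock can also be started with an HTTPS port.
    }
}
```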

That’s really interesting. It’s a commercial open source business. What's the commercial model around it?

The commercial model is that we have WireMock Cloud. WireMock Cloud is not just WireMock as a service. It's bigger because it includes a lot of added value for API developers. How we position it is that WireMock itself is a tool or a framework for various kinds of integration tests, whereas WireMock Cloud is more of a developer platform. There are a lot of collaboration opportunities. For example, teams can use it to collaborate on an OpenAPI specification, to pass definitions between teams, and to develop prototypes and share them with other teams for development purposes.

Also, there is a lot of added value for quality engineering, too. For example, there is a chaos engineering feature in WireMock Cloud. It's not the managed injection of failures that WireMock does, but a randomized one. If you don't like the Chaos Monkey, you can have the Chaos Mockingbird, which is also quite good for many use cases.

This is one thing we do, and it has been really successful. In May 2023, we secured a seed investment round, so everything is going quite well business-wise. I can't share all the data, but I can already quote one of the investors: we have more than 100 paying customers, and those are the larger ones, not just $6,000-a-month ones. Also, there are a lot of experiments with providing it on premises. If someone needs to do API mocking and collaboration on API development on premises or as a managed solution, let us know. It's also part of our ideas at the moment. This is what helps us fund the development of the WireMock tool.

I have to mention that WireMock Inc. is a small company. It's still the major and primary contributor to WireMock, but a lot of other projects are being developed by the community fully independently. As part of the community consolidation, I believe this is the model that must be retained. There will be a large ecosystem and a lot of implementations, and everyone is welcome to participate. At the same time, I can see value in there being somebody, or an entity, that helps this ecosystem and acts as a steward of it. I wouldn't see it as owning the whole ecosystem, because that doesn't make sense in the modern world.

What's next for the platform? What's being built out in the next 6 to 12 months?

For the open source project or the platform itself?

Either.

For open source, we have a public roadmap that was published a few months ago. There are a lot of things. First of all, we are working on WireMock 3. It's a new release of WireMock with a lot of added value. Along with a lot of internal changes for performance, etc., we are dropping support for older Java versions. It becomes much leaner in terms of the implementation. Also, ecosystem-wide, there are a lot of efforts to streamline things. For example, we have released a Testcontainers module for WireMock in Java. There is also a Testcontainers Python module, and the same is on the way for .NET.
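
A rough sketch of that combination, using plain Testcontainers with the official WireMock image; the image tag is illustrative, and the dedicated WireMock Testcontainers module wraps this pattern with extra conveniences.

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class WireMockContainerExample {
    public static void main(String[] args) {
        // Run the official WireMock image as a throwaway test container.
        // The tag below is illustrative; pin whichever release you actually use.
        try (GenericContainer<?> wiremock =
                     new GenericContainer<>(DockerImageName.parse("wiremock/wiremock:3.3.1"))
                             .withExposedPorts(8080)) {
            wiremock.start();

            String baseUrl = "http://" + wiremock.getHost() + ":" + wiremock.getMappedPort(8080);
            System.out.println("WireMock available at " + baseUrl);
            // Stubs can then be registered via the admin API under baseUrl + "/__admin".
        }
    }
}
```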

For other repositories, we are doing a lot of consolidation at the moment. We work with the maintainers of these repositories and try to define how we could collaborate. For example, there are a lot of opportunities for alignment in terms of configuration and APIs, because WireMock also operates in standalone mode as a server, and hence we need APIs. There are also some discussions about integrating OpenFeature there. Let's see how it goes. A lot of unification and consolidation has to happen, too. I believe it will be a really interesting journey for the project and its community.

That's awesome to hear. Thanks so much for your time. Finally, are there any open source projects that you've discovered that you're interested in and want to share?

Regarding projects, it's quite a good question because I discover a new project every week or so; sometimes I'm not even ready to spend time on them all. For me, the key highlight is Penpot. It's a new open source design platform by Kaleidos, which I find super helpful. When you do community management, you do a lot of design work. This is the project I would like to highlight.

That's great to hear. We had them on the show a while ago. I was watching them closely when the Figma and Adobe acquisition was announced, and I saw their GitHub going crazy. That's a great shout-out. I concur; it's a really cool project. Thanks again for your time. I wish you all the best of luck with WireMock.

Thanks a lot for inviting me.


About Oleg Nenashev

Oleg is a passionate open source and open hardware advocate who believes in open communities. Currently he works on building user and developer communities around WireMock and WireMock Cloud. He is a core maintainer and board member in the Jenkins project where he writes code, mentors contributors and organizes community events. Oleg is a technical oversight committee member in the Continuous Delivery Foundation, and also a CNCF ambassador. Oleg has a PhD degree in electronics design and volunteers in the Free and Open Source Silicon Foundation.
