Dynatrace

Interview with Alois Reitbauer: Chief Technology Strategist, Dynatrace
By Ben Rometsch on August 22, 2023


When it comes to technology, having reliable monitoring is crucial to ensuring a positive user experience and the seamless operation of an app. Dynatrace specializes in enhancing the digital environment of organizations with their cutting-edge solutions, unlocking the full potential of digital intelligence. Their innovative platform has the power to propel your business to new heights.

In this episode, Alois Reitbauer, Chief Technology Strategist at Dynatrace, dives into how they discern the fundamental specifications and overall project vision beyond its individual components. He also explains the development of OpenFeature and how collaborative efforts within the team contribute to project success. Alois reveals the instrumental role of the Open Source Canvas within the company.

---

In this episode, I'm interested to talk with someone who I've been working with professionally for a little bit of time. Welcome to you, Alois Reitbauer, who is both the Chief Technology Strategist and Chief Product Officer at Dynatrace.

My focus is on open source, but I'm also leading our internal research activities, everything that's forward-looking product research, and I'm acting in my product management role.

For those who aren't aware, can you give us a brief potted history of Dynatrace and where Dynatrace is as a business?

When we started back in the day, what we now call observability was called APM: Application Performance Monitoring. Initially, the uniqueness of Dynatrace was that we could collect distributed traces in production, and that was a long time ago. That's how it evolved. More or less, the next generation of the product involved using AI. Everybody's talking about AI, but we're not talking about LLMs and that type of AI; we mean causal AI to find the root causes of problems. That's how we evolved more and more into an analytics company. We moved from data collection and processing to analytics, and we are now also adding automation capabilities.

The goal is more or less to have hands-off operations, support people in their day-to-day work with a set of technologies, and eventually automate a lot of tasks, starting with simple things. If you realize that there's a certain problem in your environment, it automatically triggers a remediation action and even automates operational tasks. That's the short history of where the company's coming from and where it's evolving.

Our overall goal is more or less to make the software eventually run itself and also secure itself. There's a security aspect too; from our perspective, observability and security are blending more and more, as they almost belong together in these environments. We are working on self-securing, self-healing systems, where the software is always kept working as it should. That's where we are heading. That's the background on Dynatrace in brief.

What was your first contact with open source software as a consumer of it? What was your first experience of working and contributing to open source projects?

Open source, for us, started strategically with the OpenTelemetry project and its predecessors. Tracing was relevant for us. We built a lot of our tracing technology internally. Our agent technology, our instrumentation technology, and all of this were proprietary and built before OpenTelemetry came along, even before the early phases of OpenCensus and OpenTracing. Before that, there was a distributed tracing group that was meeting every couple of months, where we thought, "It makes sense to collaborate here."

We were all building the same things. All this instrumentation technology to instrument certain libraries and frameworks: we were all doing the same work. There were ten companies out there doing the same thing with the same outcome, and it was hard for everybody. OpenTelemetry as a project also provides way more possibilities for the actual middleware providers, whoever provides the software, to add the instrumentation themselves.

Back then, we had to reverse-engineer the software to build the instrumentation. Whenever there was an update, this process had to be repeated. Think about this. This wasn't just one company doing this. Many companies were doing it. That's where we said, “This is a project where we need to strategically get involved.” By strategically getting involved, I mean joining key activities on the project, actively contributing, and also trying to support the project, drive it in the right direction, and help out there.

I can't remember who it was. It might not have been Dynatrace; it might have been New Relic. I saw that they had a Python library that was Django-aware. I remember thinking at the time, "They've gone from building SDKs in a bunch of different languages to saying, 'Now that we've done that, let's make it so that if you want to instrument Django, you drop it in and it's a few lines of code.'"
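That drop-in experience is essentially what OpenTelemetry's instrumentation packages offer today. As a minimal sketch, assuming the opentelemetry-sdk and opentelemetry-instrumentation-django packages are installed (the Django settings module name is illustrative):

```python
# Assumes: pip install opentelemetry-sdk opentelemetry-instrumentation-django
import os

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.instrumentation.django import DjangoInstrumentor

# Hypothetical Django project; only the settings module name would change.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

# Wire up a tracer provider, then instrument every Django request handler
# with a single call. No per-view code changes are needed.
trace.set_tracer_provider(TracerProvider())
DjangoInstrumentor().instrument()
```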

I was thinking, “There are probably twenty web frameworks across different languages that are popular and they're all moving targets as well.” Previous to OpenTelemetry coming into existence, was there a feeling that the business was close to the point of them having to throw their hands up and say, “We can't manage this anymore?”

I wouldn't say it was, "We can't manage this." It's more, "We have established processes." With more and more frameworks appearing, you are always behind whenever somebody changes something. There's even internal tooling at Dynatrace where we track pretty much every library we support out there and every change that's happening. It's a lot of effort that you're investing. The question is, "Where do you want to spend your time providing value for your customers?"

That should be something unique to your company, not something everybody in the industry agrees on, like the most important metrics or method calls you want to instrument in Django, to use your example. There's overall agreement there and nothing to differentiate yourself with. As a business, you always need to ask yourself, "Where do we differentiate? Where do we add value on top of what everybody else would need to do as well?"

Dynatrace: As a business, you always need to ask yourself where you differentiate, where you add value, and what everybody else would need to do as well.

It's more or less a business decision, where we also decided, "Instrumentation is a commodity problem. It's a lot of work. If we do it, we have to charge for it." There were a lot of people working on these topics for a long time. The question is where we want to strategically grow as a company and where the whole industry is moving. Even back then, we saw the industry moving more towards analytics.

I like to share this story. When I started in the early startup days of Dynatrace, a ten-server Java environment was a big environment; they had these ten big boxes there. A hundred servers was big a while back. Suddenly, we started to talk about 10,000 servers, and then about 1 million containers that people were running.

For a long time, Java was the dominant language in enterprise software. It still is, but it was mostly Java, then .NET, and some others coming in. With people moving towards microservices, you see different architectures and problems, as well as more languages and hybrid environments that you're dealing with. The complexity increases.

Another trend was that a lot of instrumentation technology relies on being able to instrument the service, more or less changing the code by adding these instrumentation points, which works fine if it's your environment or infrastructure. Once you move to multi-tenant shared cloud services, this is no longer possible. I can't instrument an AWS service, so they have to provide OpenTelemetry data to me for the traces that I have in there.

There are some limitations where this didn't even work. It's not that it was unmanageable per se but the cost-benefit ratio at some point wasn't there anymore. As the whole industry and requirements of people were changing, where do you want to put your focus? What do you rather want to commoditize and not differentiate on anymore?

That's interesting. I hadn't thought about that. If you've got a runtime in a Docker container, you've got no idea what that process is doing. You can see a little bit of your part, but it could be doing 101 other things at the same time, potentially for other customers.

The Docker container was not so much a problem. We invested a lot of time back then in being able to instrument in there. Keep in mind that a lot of the things you have to do are tricks at the technical level. Things break because you're not sticking to officially available APIs in some cases; you're doing things where you have to figure out yourself how they work. We did a lot. The instrumentation technology still has a lot of value. You can throw pretty much any Docker container, any runtime at the Dynatrace agent. It'll automatically instrument it for you.

In the past, all this instrumentation had to come from us. With OpenTelemetry, if there's OpenTelemetry instrumentation in there, we can leverage this right away. If somebody updates the framework, the framework author can provide more or less the best-in-class instrumentation as they see it for that specific framework. Eventually, this helps you increase the breadth much more significantly. You also shift the load away from the people who make the monitoring and observability software to the framework providers, who know their frameworks better anyway.

In terms of OpenTelemetry, was the project something that had been in the idea of the company for some time or was it something that was like, “Wouldn't it be great if we all got together and did this?” Was there someone who suddenly came up with this idea? How did the idea originally come into existence?

In all fairness, we were not the ones coming up with the idea. It was across a lot of companies. All of them are part of OpenTelemetry. We didn't have that problem because we had technology coverage back then. We might not have started it because we didn't have the necessity. Many of those other companies didn't have the agent technology or instrumentation coverage that we had.

We could have continued the way we had been working back then but as the project came along, it became obvious that this even for us makes a lot of sense. That's why we then immediately wanted to join in. In all fairness, we would not have been the company that started it but as we saw it emerge, we saw the value even for us to contribute to the project immediately given where we wanted to go as a company as a whole.

In terms of coming up with an agreed specification for all of these different vendors and what must be deeply technical projects, sets of specs, and things like that, how rapidly did that come together?

Specs overall always take time when you join a spec project. We initially started with open source contributions more from the standards side, with some W3C standards that we used to work on, like timings in the browser and so forth. It was Navigation Timing, Resource Timing, Beacon, and a couple of other standards that we put out there.

Specs take time because you need to find agreement amongst vendors. There are different technical opinions, but there are also existing implementations. A spec-first approach is usually the more challenging way to build something, because you need to get all the agreement up front, versus building a project and hoping that others will eventually adopt it and change their implementations.

Down the road, it creates a common understanding and way to approach problems, and it ensures that later on, people will conform to that specification. When we started work on how we forward the trace context, for example, there were existing implementations from everyone, but no agreed way. Conceptually, we all agreed on what we needed to do, but still, everybody was doing it differently.
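For context, the format the W3C Trace Context specification eventually agreed on is compact: a single traceparent header with four dash-separated fields. A minimal sketch of composing and parsing one, following the published spec (the helper functions are illustrative, not from any SDK):

```python
import secrets

def make_traceparent(sampled: bool = True) -> str:
    """Compose a W3C traceparent header: version-trace_id-parent_id-flags."""
    trace_id = secrets.token_hex(16)   # 16 random bytes -> 32 hex chars
    parent_id = secrets.token_hex(8)   # 8 random bytes -> 16 hex chars
    flags = "01" if sampled else "00"  # bit 0 of trace-flags marks "sampled"
    return f"00-{trace_id}-{parent_id}-{flags}"

def parse_traceparent(header: str) -> dict:
    """Split a traceparent header into its four dash-separated fields."""
    version, trace_id, parent_id, flags = header.split("-")
    return {"version": version, "trace_id": trace_id,
            "parent_id": parent_id, "sampled": flags == "01"}

header = make_traceparent()
print(header)  # e.g. 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
print(parse_traceparent(header))
```

Details as small as those field widths and delimiters are exactly where existing implementations diverged, which is why agreeing on them took real negotiation.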

It means everybody needed to touch their implementations. That might sound okay, you're changing a library and moving things over, but in reality, it is way more complex. If you have your proprietary implementation and software deployed at customers, which you usually do when you are observing systems, you need to support your existing implementation and the new one simultaneously.

You also have to ensure that there is a way for them to interact, because both might be active at the same time. Think, for example, about the cloud providers: they had existing implementations for their cloud services, which they needed to adjust. Sometimes a dash in a name in a specification might result in millions of dollars of implementation work.

You have to understand this and know that a person is insisting because they're trying to protect an investment they have made as a company. That's why this process is so important. It's always a longer process, and that frustrates people a bit. They want to move fast on specifications: "Why are we still discussing this? We could make our lives much easier." But eventually, that's what makes sustainable specifications that people buy into. At some point, everybody has to compromise.

Usually, the goals in the beginning are much bigger. Even back then, some people complained about the trace context. Why is it so simplistic? Why doesn't it cover use case X? Why can't it be directly compatible with what is already out there? There are good reasons for all those things. You keep repeating these conversations, but it is a longer process. As a company, you might be able to build your own specification within two months; it might take a year or longer if you do it in a collaborative fashion. But that's the way you reach agreement.

As a company, you might build your own specification within two months; it might take a year or longer if you do it collaboratively.

In terms of the amount of effort and resources that Dynatrace was putting into the OpenTelemetry project, how did you as an organization figure out what the right amount of time was? It's split: some of it is working on your SDKs and writing those new interfaces. In terms of working on the core specifications and the project as a whole, outside of the Dynatrace-specific components, how did you go about figuring that out?

We started with a small team. That's what we usually do in open source projects, maybe 2 or 3 people depending on the project; that's how we start on the majority of projects. The way we structure it internally, everything that is related to the product is built by product teams, and we have dedicated teams that are responsible for upstream and downstream work.

It's also a big part of the success of the way we are working. To the outside world, we engage in the way the open source project and the community expect us to engage: participating in community discussions, working with public GitHub issues, and all those processes. Internally, to the organization, the team mimics the behavior of a product component, providing the open source component, SDKs, whatever it is, as a service, and taking away the complexity of contributing things back upstream.

The team also takes care of upstream contributions on some topics. They collaborate directly with the product teams. It grows over time, but it's also driven by where we, as a company, see our main interest. The contribution is driven by business relevance. To put it simply, it's everything that is part of our product, provides core value to our customers, and is related to open source. That's where we also ensure that the open source project or component is successful.

We ensure that we have maintainer rights and that we can contribute back upstream, which is important because otherwise, we would end up with an orphaned version of an open source component. This work is also decoupled from the product work. We know, "This is what we need to do for the product, and these are the additional activities we need to do upstream." That's how we come up with the proper number of people, the same way you regularly plan how much R&D talent you put into a problem or a specific task.

Was there an understanding within the teams that were working on OpenTelemetry as to how successful it would be? You told me earlier that it's the second-largest project in the CNCF behind Kubernetes. Did you guys all realize at the time that it was going to be this successful?

Let's put it this way. We were sure from the beginning that this makes a lot of sense, that it will have an impact, and that it will be the direction the industry is moving in. What was surprising to me, and still is, is how many people are adopting OpenTelemetry directly. My expectation was much more that you would see other projects, frameworks, implementations, and so forth adopt it, and then end users would have to adopt it less. It should eventually disappear.

You use a library, some form of auto-instrumentation, or configuration; OpenTelemetry is there, but you shouldn't have to write OpenTelemetry code yourself. It was a big surprise that it went so far into the end-user community, which was unexpected. It might change in the future as more libraries come with OpenTelemetry support out of the box.

You won't have to write it. I still believe that most end users should not have to write instrumentation and telemetry code, especially for a web framework; if everyone has to do it, it should already be part of that library. There are some exceptions, where there's specific business logic that you want to trace that you cannot put into a framework. This always depends on what you're doing.
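For that exception, a hand-written span around business logic, here is a minimal sketch using the OpenTelemetry Python tracing API; the tracer name, function, and attributes are illustrative:

```python
from opentelemetry import trace

# Illustrative instrumentation scope name; without a configured SDK this
# returns a no-op tracer, so the code is safe to run as-is.
tracer = trace.get_tracer("checkout")

def apply_discount(order_id: str, code: str) -> None:
    # Business-specific logic that no framework instrumentation
    # could ever add for you automatically.
    with tracer.start_as_current_span("apply_discount") as span:
        span.set_attribute("order.id", order_id)
        span.set_attribute("discount.code", code)
        # ...actual discount calculation would go here...
```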

If you look at it, all the big observability vendors, the APM vendors back then, had an interest in starting to contribute. New companies were also entering the field, in tracing specifically, and the whole observability space in its more modern form. Frameworks and middleware providers are adopting it, so there's a large audience. It's not a big surprise that it gets adopted widely. The only surprise is how many end users are still writing OpenTelemetry code. I see this going away in the midterm.

Are there use cases that it's solving that weren't ever considered when it was originally designed?

It does exactly what it was built for. We are still working on that; the team is still working on the roadmap they had for metrics, traces, and log types of data. The collector came along. It keeps evolving, especially the work you see on the collector, with everything in the exporters, the receivers, the ability to transform data as it flows through, and the addition of remote management. A lot of this is technology maturing as it goes. The semantic specification is constantly evolving. We're in a phase where the technology is growing up a lot. That will continue over the next couple of years.

Overall, the goal did not change. What you can see is that more signals are added, think about profiling data and so forth. These are things you will always see as the space itself evolves and new technologies come aboard. The governance board is doing a great job of keeping the team focused, which is important. I remember a discussion about a unified observability query language: should this be directly part of OpenTelemetry? This is handled more as a working group in TAG Observability within the CNCF. It will eventually make its way back. The project didn't get an increase in scope, maybe to some extent, but it's still following the same charter with the same principles as it always did.

Coming onto OpenFeature, which is how we know each other, can you talk a little bit about whose idea that was and how that project came to life?

OpenFeature, for the people who don't know about it, is a standard for feature flag definition. There are also some out-of-the-box components for flag evaluation. How this came along: at Dynatrace, we did tracing, and we wanted to analyze the impact of a certain release. What's the impact of a certain feature? We looked at the market and said, "This is what we need to do."

Dynatrace: OpenFeature is a standard for feature flag definition. There are also some out-of-the-box components for flag evaluation.

Tying it back to OpenTelemetry, you would have to build specific instrumentation for the SDKs. We started to look at the SDKs and realized they were all doing the same thing. I looked at Flagsmith, LaunchDarkly, GO Feature Flag, Unleash, Split, and all of them. There was good agreement in what the SDKs did.

At the same time, we thought, "While they're all conceptually doing the same thing, they're doing it syntactically differently, and that makes it hard." For us, the challenge was that there are roughly ten-plus vendors out there and ten different runtimes. That would make 100 implementations that need to be maintained for no good reason.

Even looking further, as we discussed before, what's the differentiation? The SDKs per se are all open source. You can get the SDK for free; you're paying for the management service and some of the runtime components that you're using. Even the vendors, who have opinions on how things should be implemented, don't see themselves as selling the SDKs anyway.

For them, having unified SDKs makes their job easier. It makes it easier to switch from one offering to another, whether you start with a simplistic approach that's more focused on development, where you want to switch things on and off, or a fully-fledged feature management system that supports everything from experiments to cohort definitions and all the bells and whistles. The SDKs were holding people back, and there's no value in that.

Similar to OpenTelemetry, there were a lot of people around the globe writing pretty much the same SDK code without seeing it as a core product differentiator. We thought, "Let's talk to the people out there. This doesn't make a lot of sense. Couldn't we have one set of SDKs?" We took a lot of design inspiration from OpenTelemetry when we decided on the provider concept, where you can hook in a specific vendor implementation. That will benefit everybody.

Eventually, nobody will have to write their own SDKs anymore. There is a split of work between independent projects and commercial solutions. People feel more comfortable: they can start with a simplistic approach, even using a flat file for feature flags if that's what they want, and then move to another solution without ever having to touch the source code.
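The shape of that idea, application code written once against a neutral evaluation API with the vendor hidden behind a swappable provider, can be sketched in a few lines. This illustrates the pattern only; it is not the actual OpenFeature SDK, and all names here are hypothetical:

```python
from abc import ABC, abstractmethod

class FlagProvider(ABC):
    """Hypothetical provider seam; vendor-specific resolution lives behind it."""

    @abstractmethod
    def resolve_boolean(self, key: str, default: bool) -> bool: ...

class FlatFileProvider(FlagProvider):
    """Resolves flags from a static dict, standing in for a flat file."""

    def __init__(self, flags: dict[str, bool]) -> None:
        self.flags = flags

    def resolve_boolean(self, key: str, default: bool) -> bool:
        return self.flags.get(key, default)

class FlagClient:
    """Application-facing API; call sites only ever talk to this."""

    def __init__(self, provider: FlagProvider) -> None:
        self.provider = provider

    def get_boolean_value(self, key: str, default: bool) -> bool:
        return self.provider.resolve_boolean(key, default)

# Application code never changes; only the provider wiring does.
client = FlagClient(FlatFileProvider({"new-checkout": True}))
if client.get_boolean_value("new-checkout", False):
    print("new checkout path enabled")
```

Swapping FlatFileProvider for a vendor-backed provider changes one line of wiring; every flag evaluation call site stays untouched.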

When we talk about feature flags, this is not one library that you change and suddenly things are different. In the worst case, you have to go through your entire source code and find every single line that's calling for a feature flag evaluation. From an end-user perspective, once you've adopted one solution, it's almost impossible to switch, or the effort is massive. That's also a problem on the vendor side: even if the customer is fully convinced they want a fully-fledged solution instead of the in-house solutions we still often see that they built beforehand, the effort is massive.

As we started to talk to more people, they said, "We are not using any SDK directly anyway, exactly because we do not want to have this problem. We built our own abstraction layer on top." So there were even more people writing even more SDKs out there, layers running on top of layers, a turtles-on-top-of-turtles approach. That made the case for the project, because everybody said, "What we are each building here isn't adding any value." Conceptually, we all agreed; the syntax was what we had to figure out. Eventually, it helps everybody, whether they're consumers or providers of projects and tools.

It's interesting what you say about the worst case being having to go through all of your source code. In our experience, that's the common case for a lot of the people we talk to; they've got a huge undertaking to try and rationalize all that down. From Flagsmith's point of view, as soon as you guys talked through the idea with us, we were like, "This is so needed." How much did the industry as a whole take to it? Was everyone like, "This is an obvious problem"?

Everybody agreed on the problem. The solution still comes with implications if you have any established technology; that's maybe a bit of a difference. In OpenTelemetry, the established vendors had their instrumentation technology. As a vendor, you are in a situation where you have to decide which road you go down and what all the implications are. It means work. It's not just that you switch over to something, even if it provides a great opportunity. There is an investment.

You need to make a business case for every investment you make, even in open source. The arguments that are brought up are why it also helps a commercial vendor or a specific project to move in that direction. That was mostly what the discussion was about. It wasn't, "This makes sense. We should do this," but, "How can we get there?" Depending on the size of the organization you were talking to: "How can I go to my CTO or CEO and explain that we are doing this? It's going to cost us time that we have to take away from other projects."

It comes with some risk, but eventually, it's going to be relevant for your business. That's why, from the beginning, part of the initial pitch deck we built for OpenFeature, when we talked to all the individual providers in the industry, was a slide about how this makes sense for your business: how it will eventually help you save implementation cost, enable you to onboard customers faster, and so forth. One of the big successes was that we didn't just say, "This is a great open source project. Think about it. It's amazing. Isn't this how the world should be?" That part everybody already agreed on.

That part was done in the first 2 or 3 minutes of these calls, but then it's, "How do we justify this? How can we sleep at night?" You have to think things through, even in your case: when do you recommend people use the OpenFeature SDKs over your own ones? What's your long-term support plan? How are you going to handle this? There will be some double investment over a certain period in one area or the other. How does all of this work out? How do you put it into a business plan? That was the important part of these conversations. I found myself discussing these topics far more than whether the overall technology made sense or not.

It is human problems rather than technical ones, quite often.

Human problems, but also business ones. Everybody is always short on people who can work on interesting new features and functionality. You're writing this thing on top that doesn't immediately provide value but is a strategic investment. Every company needs to figure out how they want to play: get involved early and ensure that everything works their way, or watch it first and see whether it becomes successful.

Every company needs to figure out how they want to play: get involved early and ensure everything works their way, or watch first and see whether it becomes successful.

When do we go all in? How much do we go in? How do you decide how much investment is the right investment? What's your conversation with the market? You even have to think this through all the way: Flagsmith, say, might be part of a press interview and get the question, "Do you think your SDK will eventually disappear and everybody will use the OpenFeature SDK?" What could the potential answer be?

For me, that was also the advantage of coming from a product background and having product responsibility, with go-to-market and everything, to understand these questions. That's what I learned in open source: it is about the technology when you build the project, but when you want to convince organizations to make strategic investments, it's less about the technology, because that part makes sense easily. It's about how you embed it in the framework of the company, into your operating model and how you do things.

Dynatrace: Convincing organizations to make strategic investments is less about the technology because this makes sense easily. It's how you embed it in the company's framework and into your operating model.

You've got a lot of experience working on projects, bringing projects into the CNCF, and maturing those projects throughout that organization. If someone is sitting, reading this, and going, “I see these same problems in my particular part of the software world,” what are the first steps that they can take to try and get these projects started and get support through organizations like the CNCF?

The biggest learning for us was to have the difficult conversations at the beginning, like the vendor conversations we talked about. Get other people on board. Don't say, "We built this project. Everybody will like it, and the people will come." You most likely have a good feeling for who might not be super comfortable with the direction you're going. Have those conversations. Try to convince these people. Get them on board. Build a diverse leadership team early on, across different organizations. See how committed they are.

Organizations come and go. Business priorities change but build governance early. Get agreement on the goals. Also, openly work with them and help them to work on the business case, covering these aspects together with them. This also helps you because you have a working governance model and maintenance model. A lot of those things have blueprints. We were inspired by a lot of information that's available.

You need to have this independent body, in this case, a foundation, that has certain rules and standards that you can leverage for your project. It's not you driving the project; you're putting it into a bigger context. Even if it's your initial idea, try to reduce your overall influence and control very quickly. You built this because you wanted it to be a community project; you can't be the single driving force.

As much as you might like to be in control of everything, that is not a good sign. Sometimes, the more conversations you have, the more you feel you would be maybe 40% faster if you did everything yourself. The conversations that slow you down might not feel like an ideal situation, but they're good. If people participate in conversations and speak up, you know that the topic is important to them and that they want to get it resolved collaboratively.

It is good that it's happening, even if it eventually slows you down. That is the recommendation I would give: talk first to the people whom you want on the project but who you think are most likely not going to join. Learn from them and their arguments. You might convince them, but even if you don't, you will convince others. Don't have the easy conversations first; you're only fooling yourself about what you're up against. That also helped a lot with our initial pitch deck to get people involved.

For example, talk to the market leader who has a proprietary solution and has the least interest in going in that direction. See what they do, whether you can convince them, and what their arguments are; then move to others. If you go to an early-stage startup that still has to build all this technology and would appreciate it being available, you won't have a hard time convincing them. The beginning is not about sitting down and writing code.

We did this on other projects: we built first, then shared it with the world and tried to get people on board. For OpenFeature, a lot of the initial work was talking to people and being willing to do other people's homework. We looked at all the SDKs, so we were ready for every single conversation and could show people. Maybe that's the counterpart to just building software: we had an early-stage research prototype, and we could show that this works and that it's not a lot of work to do.

The goal was never to release this code. We always agreed we built it to show people that this works and that it's not a lot of work for them, and then we would throw all this code away. The funny thing, though, was that as we were talking to so many people, the code kept evolving, and we eventually did not throw it away, because it already did quite a lot of what it was supposed to do. We were committed to doing this. The first couple of months were about getting everybody on board and building a common understanding, and then getting into the work once you have the commitment from people.

Get everybody on board, build a common understanding, and get into work once you have people's commitment.

One of the things that always strikes me in the feature flagging landscape is that for something that seems so simple, every vendor's got a slightly different approach. It's crazy. Some problems in engineering get simpler the more you look at them. Other problems sometimes get more complicated. In terms of things that you want to promote or projects that you're interested in, you mentioned before we came on this interview one project you've been involved in, for people who want to try and get one of these ideas up and running within the organization they're working at.

We were talking about the Open Source Canvas. If you go to OpenSource-Canvas.org, it's a micro website with a small graphic, a PowerPoint template. It's a business canvas but built specifically around open source. The idea behind building it is that it covers all the major aspects you should be able to answer when you start an open source project. When you start a business, you do your business model canvas; this is about doing the same for open source. It has become instrumental to us at Dynatrace from a strategic planning perspective, answering all those questions.

There's deliberately not a lot of space in there because you should be able to more or less phrase everything in 1 to 2 sentences. It shouldn't be too complex. If it's too complex, you don't understand it well enough yet. That’s my takeaway. It's about, “What's the strategic goal of this project? Why does this project even exist?” The next question is, “Why should it be open source?” Open source usually involves collaboration. It has a certain component of overhead to it. It takes longer. Your overall investment is going to be high. Why is it worth it?

What's the value to the community? Why are you building this? Who's your target audience? It's important. Who are you building OpenFeature for? Are you building it for feature-flagging vendors? Are you building it for end users? Why should anybody contribute? What's their value? Somebody needs to invest their time so they need to have motivation. How do you even want people to engage? Whom do you want to use it? Where do you want to have contributions? What are your activities?

A very important one is the business value. Often people say, "We have the open source project, but then we transition people away to our commercial offering." That's the wrong way of thinking; it's superficial. First, you should have 100% buy-in that you want your open source project to be successful and people to adopt it, and that your open source project does not threaten your commercial business model. The more successful people are with your open source project, the more success you should see for your commercial offering.

You need a differentiating business value to bring to the table, not just saying, "I'm hosting this open source project and providing a SaaS service around it," or something like this. Ideally, you change the use case and the target audience. If you change both, you have an entirely different product that you're selling versus the one people are adopting. You're solving one problem with the open source project and another with the commercial one, but they fit together, and it almost feels like a funnel.

Dynatrace: The more people are successful with your open-source project, the more you see success for your commercial offering, and you have a differentiating business value to bring to the table.

If you can't figure this out, at some point, you will have issues; your commercial offering will end up competing with your open source project. That's why this is very important. You need to get an understanding of how much you're going to invest. You take the investment and compare it to the business value, and then you know whether it makes sense for you or not. That's what we put in there, and that's also the framework that we're using at Dynatrace to assess any activity that we are running. It was super helpful. It creates a lot of clarity.

We deliberately put it on a one-pager. We keep using it for every idea. Not every idea survives, which is fine, because we know why it didn't survive or where we were missing clarity. The reasons differ: sometimes the business value is not clear; sometimes the value to the target audience is not clear; sometimes people are not clear about what the investment should be or didn't have the proper investment buy-in. But at least it's a deliberate decision, and it asks all the important questions. That's why we created it.

Are people filling this in fairly regularly within Dynatrace?

Yes. If there's a project you want to contribute to, you have to fill it out.

OpenSource-Canvas.org. This is a great idea. Funnily enough, Flagsmith came into existence through something similar. We were trying to figure out how to spend our spare time, and one of the team put together a one-pager like this with criteria for evaluating ideas. I'm a big fan of this thinking.

It helps people to have a structured approach and it has hard questions in there. I'm a big fan of asking the hard questions early on. Usually, if I'm not feeling comfortable asking the hard question, I should be asking it. It's a good indicator. If you don't want to ask a question, ask it. There is a good reason. Your gut is mostly telling you that this is something you should be discussing.

Even if you don't want to ask a question, ask it.

Alois, thank you so much for your time. It's been super fascinating. For those of you reading, you can google OpenFeature and the giant that is OpenTelemetry. We are always looking for contributions or people joining the community at OpenFeature. If there's something you want to work on, the great thing about the project is it's easy to come into contact with it. A lot of people are using feature flags. We're looking for people to come and join in. Alois, I look forward to speaking to you again soon.

Thanks for having me.


About Alois Reitbauer

I am leading Dynatrace's strategic open source initiatives for projects including OpenTelemetry, OpenFeature and Keptn as well as developing industry-wide standards - most recently co-founding the W3C distributed tracing working group.

As an active member of the cloud-native community I am co-chair and founding member of CNCF TAG App Delivery and speaker and program committee member of conferences like KubeCon.

I bootstrapped several commercial product offerings, like Dynatrace's second-generation platform and, most recently, the Cloud Automation module.

