RackN

Interview with Rob Hirschfeld: Founder and CEO, RackN
By Ben Rometsch on June 13, 2022
Host Interview

The thing that makes open source so powerful is collaboration... What makes automation so hard is we don't know how to collaborate with it

Rob Hirschfeld, Founder & CEO

https://pod-feeds.s3.us-east-1.amazonaws.com/AxQ6dqLUY.mp3

Check out our open-source Feature Flagging system – Flagsmith on GitHub! I’d appreciate your feedback ❤️

Ben: I’m excited and interested in talking to Rob Hirschfeld, who’s got a super interesting background in heavy lifting infrastructure. Welcome, Rob.

Rob: Thank you. It's exciting to be here.

It is difficult to research you and your projects because they dig deep into the infrastructure world. I wanted to start right back from the beginning and ask you, how did you get into programming and infrastructure?

I started on an engineering track career-wise. I was much more interested in programming robots, factories, and things like that. I've always been a programmer and an engineer. There's an interesting intersection there, but my programming days go back to the Commodore PET in middle school. I was reading the BASIC instruction book and programming simple games on the Commodore PET. I was lucky to have a lab in my school and a parent who didn't want to pick me up until after work. They would be like, “You need to entertain yourself for a couple of hours after school.” That ended up being a very good introduction. When I was starting my career, I had this painful experience where I was doing programming and consulting for companies. I would deliver the programming. I could estimate how long it would take for me to build a program.

When I went to install that program, it was painful. It took more time to install, maintain, fix, patch and do all that stuff than it took me to write the program. This was back in the late ‘90s. That got me connected to Dave McCrory, who is well-known for his Data Gravity work. We started a company in 1999 that was an application service provider. Nowadays, we would call it cloud. We started that company because we were both frustrated with how hard it was to get software onto people's desks. He was doing it through Citrix. I was doing it through the Web.

To run somebody's data center for them, which is what the cloud does, we found that there wasn't much process. People didn't know much about what to do. We were the first people to spin up ESXi beta or ESX at the time. We were the first people outside of VMware to get that beta working because we needed virtual machines, and then automated them like crazy, which is what led us down the cloud path. It was super disappointing to see how hard it was to build automation around infrastructure. That ear-worm of a problem has been driving my career since. I'm trying to figure out how to make that less painful and less toil.

I was a software engineer at that time. I remember I left consulting and got a job in what would now be regarded as a digital agency back in the dot-com boom. Deploying the project that we built meant buying a couple of servers, finding a data center, driving the servers to the data center, cutting your hands, and racking them in. That was the reality, kids these days. It took a lot of foresight to consider automation back then. Back then, it was CVS and then Subversion. When I started working at that company, we were passing thumb drives around.

It was crazy. That is still the state of automation compared to the state of development. We are huge fans of the whole infrastructure-as-code movement. I want my automation to be more development-like. We're only scratching the surface with this. We're excited that I can check our deployment scripts into Git. Many years ago, we were like, “We need source control for our code.” Now we're getting excited, many years later, about doing it for automation. There's so much more we can do on that infrastructure side, but it's hard.

The other thing that has changed a huge amount since then is that virtually none of the software tooling that we used was open. I remember there was a Java app server called Blue Martini, and it was $20,000 per CPU. I can't recall any open source software that I was using. I guess Napster. Was Napster open source? It probably wasn't even open source. There wasn't anything.

We would download the binaries and not even think about it back in the day.

Obviously, there were Linux, Apache, CVS and some version control. How did you get introduced to open source, and how did you end up making that your career?

I got there through OpenStack, but it's worth giving you some backstory. Before that, I was doing a lot of Microsoft, Visual Studio, and C# for all of my first startup stuff. We were early C# people and happily in the Microsoft camp. I ended up at Dell in this hyperscale group. They were selling servers to the biggest customers. We were a quarter of the servers shipped globally. You can imagine who the customers were. We were hired in as a software team to ship clouds from the factory on this new class of hardware, from Dell to the next tier of customers down. We did ten solutions in one year, which for Dell is a blistering pace.

We partnered with ten different companies. It was an insane pace. Of those solutions, every single partner was acquired or didn't let us contribute to their codebase. We had some cloud partners who were like, “You all are hardware people. I don't care what your software bona fides are.” They'd give you a nice pat on the head: “Don't worry about it. We'll take care of it.” They were writing their own bootstrapping automation using Bash and SSH. They were using Chef or Puppet, which were still new. Ansible was still early.

What we realized was that we had to do much better at bootstrapping these environments. This is the thing that people forget with open source. They're like, "It's free. I don't have to pay for it." The thing that makes open source so powerful is collaboration. That's why we come back over and over again. What makes automation so hard is we don't know how to collaborate with it.

We got into this very crystallized moment where we had done ten solutions in one year. A lot of them didn't go anywhere or didn't allow us to participate in their solution. That was frustrating. When OpenStack and Hadoop showed up, we were able to take all this code we had been writing and turn it into something that was part of a community and people could collaborate with us.

For us at Dell, it was a project called Crowbar that was literally an OpenStack installer. We were the first OpenStack installer. We open sourced it all. We converted it so it could install Hadoop also. We got a lot of attention. It felt great to do that, but there were things that made it hard for people to contribute to it. It's funny because OpenStack and Kubernetes both struggle with the installer question. Every software project has trouble figuring out how it gets installed and bootstrapped. It's been a real challenge to do that as an open source thing. There are a lot of reasons for that.

Did you have to persuade Dell to open this code?

The nice thing about this is that we had buy-in all the way from Michael Dell to participate in OpenStack. It was an easy lift for us to say this is part of OpenStack. There's a lot of OpenStack stuff. One of the problems with OpenStack is that there were a whole bunch of companies that said, "We want you to participate with this. Bring code." We had some but for other companies, it was much more of a stretch. I think you see this in the Kubernetes community. I've described that as a big tailgate party, where everybody shows up and opens their tailgate and says, “This is what I have to offer.”

Sometimes these big projects end up with everybody coming in and wanting to drop the stuff they have into the project. For us, it definitely caused some angst. At Dell, they had to figure it out. We used a cartoon logo, which was strictly verboten from a branding perspective, and we got away with a whole bunch of stuff. I do think that how a corporation looks at it matters; it's one thing to take open source code and put it out. It's incredibly different from maintaining, productizing and sustaining it.

Obviously, this is all before containerization. What were the goals of OpenStack when it was originally born?

I was on the board for four years, although I inherited this part of the vision. To their credit, it was designed as a multi-vendor collaboration. At that moment, VMware was trying to create cloud standards, and it looked like it was going to be them or Amazon. Amazon wasn't allowing people to use their APIs; they were fighting those as copyright. It turned into a non-issue but it's still problematic. There are no standards for cloud at all. It's hugely complex.

By design, we wanted different vendors and partners to collaborate in this space. There are some other elements that came in around that, but that was a very important thing in OpenStack's formation. It was the idea that Dell, HP, Lenovo, Huawei, and all these companies would have a place to come together. It also created challenges because that included Microsoft with Windows, Citrix, and even VMware. They still have a nice OpenStack implementation, but the top priority was the collaboration and not the product. A lot of open source starts with, “I need to solve one very specific thing,” and stacks up from there. OpenStack's job was to provide virtualization, not containerization.

This must have been one of the first times that those companies had worked together.

It was very tense. You had Red Hat, SUSE and Canonical all trying to play nicely together. Even though we were a pretty collaborative group, there were still tensions behind the scenes, and people would go home after a conference and their executives would be like, “You can't do that with this company. You can't participate with that.” The Crowbar stuff was Dell, and we couldn't get HP to come on board, even though it would have benefited them. That was very problematic. We ended up partnering Crowbar with SUSE. There are some interesting stories around CI/CD and an open source infrastructure for that. That partnership meant it was very hard for Canonical and Red Hat to join in. Those challenges are very real.

This was an enormous project. Was the technical direction board-led?

The board was told very firmly not to set the technical direction. There was a technical committee that was formed out of the projects and voted on by the community. The board was meant to do business and brand and stay out of the technical community. That created a lot of tension, which was healthy. There were times on the board, which included a lot of very technical people, when we would say, “We think there's a problem here,” and the technical committee didn't want to hear suggestions from the board.

I wasn't experienced enough to navigate that as cleanly as I would have liked. I got very involved in trying to define the core of OpenStack so that you could certify that a vendor was conforming to OpenStack. We came up with a nice strategy of using the CI/CD infrastructure and testing frameworks to allow vendors to self-certify. Kubernetes picked up the same strategy, but it took years of effort to have that type of conformance test.

There are similar processes that Red Hat has for certifying that they meet the Kubernetes conformance requirements. That code that you've been working on has followed you through your career. Is that fair to say?

We're in our fourth generation past Crowbar. Our current product is called Digital Rebar. None of the code has survived those transitions. The idea for Crowbar was super simple, but the execution has taken us a while to get right. I want to be able to write automation that works in every data center on every infrastructure. If I fix something, then I can apply that fix backward to all the other sites that have been deployed. The snowflaking of automation is incredibly expensive to the industry. If you look at Ansible Galaxy or something like that, where there are a hundred ways to do something simple, it's very hard to curate that.

If you take that Ansible playbook and modify it, you're not going to take patches from upstream anymore. It's very hard to stay in sync. When we were trying to do this for OpenStack, Hadoop, and a long time later, Kubernetes, the problem was that if we improved the automation or process, you couldn't get the benefit of that because you'd already deployed it. It would be like coding and saying, “Somebody patched the Log4j library, and I can't easily take the new version of Log4j because there aren't clean boundaries.”

That's the state of automation today, and it was the state many years ago when we started the Crowbar stuff. You have to think about what open source means and what collaboration means in these cases. If you're creating open source and it turns into a million snowflakes because nobody can keep using the code that you're writing, then that's a problem. That's the automation world. I believe the development world is not in the same state. Open source development does a pretty good job of sharing libraries and packaging things.

You say it's in its fourth generation. That's unusual; engineering teams rarely get to keep saying, “We tried this new thing.” It started with virtualization, then containerization. It seemingly needs to technically reinvent itself. As you described it, it sounds like quite a simple problem. On one level, it's a simple problem, and on another, it's an impossible thing to solve.

What has happened over the years with this very simple problem is that we've realized we were solving it at the wrong layer. That meant we kept coming up the stack. We still see people who say, “I can just build a PXE server and get my whatever server going. I can write a Bash script that puts Terraform and Ansible together,” which we see all over the place, and which is true. Those individual actions are fine, but they miss this idea of collaboration, reuse and modularity.

What we've gotten to with the fourth generation is starting from this idea of infrastructure pipelines. What you're doing is you're saying, “I'm not replacing the tools, infrastructure or the servers I use. I want to get these people, tools and infrastructure that I have working in coordination so that they can then collaborate.” You need both. The infrastructure pipeline approach allows you to say, “My main goal is connecting the dots between all of these steps.”

This has been hard to make happen in open source. Open source does a good job of building a tool that solves a problem. Terraform is a great example of that. They also say, “I can build you an Amazon provider, but it's totally incompatible with a Microsoft or a Google provider.” They are the same tool but different providers. They are not interchangeable in any way, which I get. We look at that and say, “As an operator, I need to connect together all of these pieces. I need to do it in a way that if somebody fixes a Terraform integration or a bare metal BIOS setting integration, I can count on pulling that new one in because I didn't want to write it, and then reuse it like it was a code module.”
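To make that “connect the dots” idea concrete, here is a minimal sketch in Python (the names and structure are hypothetical, not RackN's or Digital Rebar's actual API): a pipeline is just an ordered chain of interchangeable steps, so a patched Terraform or BIOS integration can be swapped in without rewriting the rest.

```python
# Hypothetical sketch of an "infrastructure pipeline": an ordered chain of
# interchangeable steps. Swapping in a patched Terraform or BIOS module
# does not require rewriting the rest of the pipeline.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class PipelineStep:
    name: str
    run: Callable[[Dict], Dict]  # takes a context dict, returns the updated context


def run_pipeline(steps: List[PipelineStep], context: Dict) -> Dict:
    for step in steps:
        print(f"running step: {step.name}")
        context = step.run(context)
    return context


# Each integration is a self-contained module; a fixed version can be
# dropped in as long as it honors the same context contract.
def terraform_provision(ctx: Dict) -> Dict:
    ctx["machines"] = [f"node-{i}" for i in range(ctx.get("count", 1))]
    return ctx


def configure_bios(ctx: Dict) -> Dict:
    ctx["bios"] = {machine: "tuned" for machine in ctx["machines"]}
    return ctx


pipeline = [
    PipelineStep("provision", terraform_provision),
    PipelineStep("bios-setup", configure_bios),
]
print(run_pipeline(pipeline, {"count": 2}))
```

The point is the contract between steps: as long as a replacement module honors it, an upstream fix can be pulled in like a library upgrade instead of being re-snowflaked.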

That took a very different set of thinking about how all these pieces work together. One of the things that's fascinating with Digital Rebar, and we realized this in the 3rd and 4th generations, was that the automation needed to be immutable inside of the platform. The power of that is that you can now update patches and little snippets of the automation. We built this modular composition system where you can say, “I'm bringing my automation into my production system.” We use it even in our dev systems. It's locked in the system.

The amount of stateful, dynamic data is limited to the things that stay truly dynamic, like machines or subnet ranges. Everything else is locked and immutable, and then you can replicate and clone it. We're not worried about somebody hand-tweaking a script because they want to fix something. This is important for the reward cycle. It rewards the person who went and patched it, went through the build process, and then got it back into an immutable artifact to put into the system. We had to get to that point. There's a whole bunch of infrastructure support to build that type of pipeline and make it go.
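Here is a minimal sketch of that split between immutable automation content and mutable runtime state, again with hypothetical names rather than Digital Rebar's real content format: behavior only changes by loading a new versioned bundle, never by editing the running one.

```python
# Hypothetical sketch: immutable automation content vs. mutable runtime state.
from dataclasses import dataclass
from typing import Dict, List


@dataclass(frozen=True)  # frozen = the loaded bundle cannot be edited in place
class AutomationBundle:
    name: str
    version: str
    tasks: tuple  # ordered, read-only sequence of task names


class Site:
    def __init__(self, bundle: AutomationBundle):
        self.bundle = bundle              # immutable automation content
        self.machines: List[str] = []     # mutable runtime state
        self.subnets: Dict[str, str] = {}  # mutable runtime state

    def upgrade(self, new_bundle: AutomationBundle):
        # The only way to change behavior: replace the whole versioned artifact.
        self.bundle = new_bundle


v1 = AutomationBundle("base-os-install", "1.0", ("discover", "install", "configure"))
site = Site(v1)
site.machines.append("rack1-node01")   # fine: runtime state stays dynamic
# site.bundle.tasks += ("tweak",)      # would raise FrozenInstanceError: content is locked
site.upgrade(AutomationBundle("base-os-install", "1.1",
                              ("discover", "install", "configure", "validate")))
```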

Can you talk about where you're at now? What problem are you trying to solve? Is it for a software project like Flagsmith, Amazon or British Airways?

We’ve got a pretty broad range of customers who we’re dealing with. Some customers could stay in the trial version of ten servers if they wanted to, and we always appreciate the small checks too, but we've got customers running at scale. A couple of them are chasing 20,000 machines managed by Digital Rebar globally. It is significant. The nice thing about what we've done is that because Digital Rebar has this immutable automation component, every one of those sites can be managed through a dev-test-production process, and then production sites can be replicated. You can't do that much distributed multi-site infrastructure and have each one be unique. You have to have this repeatable process.

We don't care if it's hardware, virtual or cloud infrastructure. We're abstracting those things out. It comes back to this collaboration idea. It's how you get teams, and when I say teams, I mean our customers' teams but also our team, so they can call us up and say, “I need to add a new hardware type to the supported list. We don't want to maintain that. We want you to maintain that,” which is perfectly reasonable. They shouldn't have to maintain those types of packages. It allows their teams to collaborate and reuse work.

Let me try and explain that clearly. In any operation, you typically have teams that have specialties. You might have an operations team or a platform team. You might have somebody who understands operating systems and pieces like that. The goal here is not to create full-stack engineers who have to do all of it. What you want to be able to do is segment the work and allow them to work on improving their part of that process. It’s a very clear way to hand off things between the two groups or upstream and downstream from them. It's a CI/CD pipeline but for infrastructure. What we've seen is as we focus on helping our customers connect teams together, it allows them to start small, but then expand what they've automated and get it more end to end.

One of the things that blew my mind when we started selling to enterprises with Flagsmith was that some of our first enterprise customers signed the agreement after nine months of discussion, lawyers and stuff, and three days later, they started committing or submitting patches to components of ours. I never expected that to happen. They were like, “One of the reasons that we went with you guys was because we can do this.” Do you find that happens regularly with your customers?

It's not as much as I had always envisioned. We're an open source system. This is the balance with infrastructure automation, which I've been doing now for many years in open source areas where people can contribute back. We talked about this. Ansible Galaxy has a way you can submit your playbooks, but it's very hard to create a reuse dynamic for people. I'm glad you're asking this question because I think about it a lot: how do we encourage our customers and our users to do things and then come back and modify parts of the system so that they can have a better experience? We do see it in things like, “I need more basic Linux install pieces,” or adjusting Kubernetes installs, or adding a feature into a Kubernetes process.

By and large, we do bring those back in for curation. I'm sure you all do too. Nobody wants their change to break everybody else's. We had this problem in Crowbar a lot. SUSE would fix something for SUSE, and it would break Ubuntu and Red Hat. We would have to scramble around to clean it up. The curation aspect of what we do for automation is super important. The other thing we've done is with the infrastructure pipelines; we call this a universal workflow. When we build these pipelines, they're all composed together out of modular units. Every modular unit has standard extension points.

It starts with a whole bunch of placeholders for, “I want to add new tasks. I want to do some conformance checks. I want to do some validation steps.” Every pipeline segment has those placeholders and they start empty, then it has the work of the pipeline segment, and then it follows up with the same kind of placeholders as wrap-up tasks. What's so powerful about that is that when we're dealing with a customer, they can take our stuff out of the box.

Everybody's operational environment is unique, distinct and special, and that's normal. For my environment, I need to run this extra task. I need to get this additional piece of information. With the standard extension points, they can write a task that does that, add it into the process at a known extension point, and handle it all through configuration.
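A small sketch of what such standard extension points could look like (hypothetical names, not the actual Digital Rebar workflow format): each segment ships with empty pre- and post-hook lists, and a site fills them through configuration without touching the curated work in the middle.

```python
# Hypothetical sketch: a pipeline segment with standard, initially empty
# extension points that a customer fills by configuration.
from typing import Callable, Dict, List

Task = Callable[[Dict], None]


class PipelineSegment:
    def __init__(self, name: str, work: List[Task]):
        self.name = name
        self.work = work                 # the curated, upstream-maintained tasks
        self.pre_tasks: List[Task] = []  # empty placeholder by default
        self.post_tasks: List[Task] = [] # empty placeholder by default

    def run(self, ctx: Dict) -> None:
        for task in self.pre_tasks + self.work + self.post_tasks:
            task(ctx)


# Upstream ships the segment...
def install_os(ctx): ctx["os"] = "installed"
segment = PipelineSegment("os-install", [install_os])

# ...and a site-specific configuration adds tasks at the known hook points
# without modifying the curated work in the middle.
def register_with_cmdb(ctx): ctx["cmdb"] = "registered"
def validate_security_baseline(ctx): ctx["validated"] = True

segment.pre_tasks.append(register_with_cmdb)
segment.post_tasks.append(validate_security_baseline)

ctx = {}
segment.run(ctx)
print(ctx)  # {'cmdb': 'registered', 'os': 'installed', 'validated': True}
```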

This is one of the things about it. I've seen this with open source a lot. You end up with patches; OpenStack was littered with these. “I have a patch to do the special thing that only I care about, but because of the way the system is designed, I have to give you a patch to fix it.” Then you're tearing your hair out: “Do I take that patch?” We got this from Oracle in the OpenStack days. They wanted to add a patch to fix something for Oracle that only Oracle cared about. Politics being politics, that was a huge fight. The problem was they had to submit a patch and have it accepted upstream to fix the behavior they needed.

That broke or frustrated everybody else. What we ended up doing to make all this stuff work was to say, “Everybody needs something unique. We don't want to curate what they don't want to expose or share.” We added places in the system to handle that heterogeneity and did it in a way that didn't break the curated stuff. That took us years of tuning. This is what you're talking about with 4th-generation systems: you need to do that type of work so that somebody can come into the system and say, “I've got a completely standard process,” and then start modifying it. We were looking at this process like, “We have one standard process for provisioning in all the clouds.” We're up to eight different clouds that we've standardized.

We use Terraform behind the scenes, which is great, but it is different for every cloud. Every cloud has different needs and requirements. This lets us inject the variations for each system into a standard pipeline. If a customer has their own custom thing, they can write their own wrapper pipeline, own it and then extend it. We wouldn't have to curate it if they didn't want us to. They could say, “I don't want to do it that way.” They don't have to submit it back.
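As a rough illustration of that per-cloud injection (hypothetical profiles, not the actual Terraform integration): the provisioning flow stays standard and only a small variation block differs per cloud, so adding another cloud, or a customer-specific wrapper, means adding a profile rather than forking the pipeline.

```python
# Hypothetical sketch: one standard provisioning flow, with per-cloud
# variations injected as data rather than as forked pipelines.
from typing import Dict, List

CLOUD_PROFILES: Dict[str, Dict] = {
    "aws":    {"image": "ami-12345678", "size": "t3.large",       "network": "vpc-default"},
    "gcp":    {"image": "debian-12",    "size": "e2-standard-4",  "network": "default"},
    "custom": {"image": "site-golden",  "size": "large",          "network": "dmz"},  # customer wrapper
}


def provision(cloud: str, count: int) -> List[str]:
    profile = CLOUD_PROFILES[cloud]  # only this block varies per cloud
    # The standard steps (plan, apply, join to management) stay identical.
    return [f"{cloud}-{profile['size']}-{i}" for i in range(count)]


print(provision("aws", 2))
print(provision("gcp", 2))
```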

Is that something that came in mostly in version 3? It's amazing. Everyone says, “I'm doing 99% the same as everyone else, but I'm doing 1% differently.” It's going to completely destroy your patch model. You see this even in the Linux kernel. I don't go read the Linux kernel regularly, but I've seen people write about it: “We're running on this motherboard. It has broken something. This is the patch that unbreaks it,” which is not sustainable.

It's the reality. This was one of the hard things. I've been talking a ton about complexity and how you manage the pain of complexity. The first step to doing that is acknowledging that complexity is necessary. You have to come back and say, “It's not a defect that they had to do it differently. It is a reality I have to cope with.” How you do that has been part of what has taken generations of development.

I find it fascinating that Google Cloud operates at an entirely different level of abstraction than AWS does. I don't think you ever see a subnet mask in Google Cloud. It's unusual to ever see IP addressing in Google Cloud, whereas with AWS, you can get into networking and stuff that I have no idea about. I don't work at that level. I've found it fascinating. It's not talked about much that two huge companies with hugely intelligent teams building this stuff have come at the problem from very different perspectives in terms of where they sit on abstraction. It's interesting as well that the current fashionable thing to say is that Kubernetes is too complicated. They didn't make it complicated just for the fun of it.

This is a case of necessary complexity. In the days when Kubernetes was coming together, we had Mesos on one side, which makes Kubernetes look like a Lego kit that you snap together in 30 seconds, and then Docker Swarm, which was way too simple. Kubernetes was in a sweet spot from a complexity perspective. To me, the complexity doesn't come from the platform. It comes from us trying to use the platform in ways it wasn't designed for.

Airbnb is using Kubernetes because they've got to serve 500,000 requests a second or whatever. That's a bad tool to pick up if you're running a two-person startup. We don't go near it when we can help it because I don't think it's designed for us.

The community could definitely be served by alternatives that have simpler interfaces. For Kubernetes, some of it is going to be people writing interfaces in front of Kubernetes that create simplification. The idea that you have to have something like Istio in front of your cluster to make it at all workable is problematic. Some of that is learning how to build the tools. This is the challenge of generational tools: now that we understand the use cases better, it could be that what we built into Kubernetes is not right. Somebody is going to come up with the next generation of it that keeps what they need and streamlines out the other pieces.

From my perspective, there's the operator pattern, which I look at as the equivalent of stored procedures: incredibly dangerous and hard to debug, but a lot of people love it. Red Hat has built huge products on the operator pattern. That's Kubernetes and it's not Kubernetes. It's a completely different use of the APIs. I've seen people blogging and posting like, “It's not about containers. It's about the CRDs.” You could split those into two products at some point. That's the normal evolution of how these things work. It comes back to this idea that every time in a project you say, “I needed to add this to accomplish this one thing,” that becomes technical debt as it accumulates.

That's the complexity that I find more problematic. That piece is pretty comprehensible, and it fits into a declarative infrastructure pattern. I think Kubernetes has been amazing for that, and that's important. When I look from an infrastructure perspective and start thinking about the benefits we get from open source for backend and infrastructure more generally, the idea of the CI/CD pipeline coming back into automation strikes me as super helpful. We're starting to do the dev processes, move things through, normalize behaviors, and then we can check in code and make sure that it keeps working.

The history of open source infrastructure is littered with companies and projects that people expected to claim this huge prize in terms of business success, not necessarily financial success. Docker is a good example, where I remember reading the first thing about Docker and thinking, “This is mind-blowing.” There seems to be this repetitive story of people like me expecting, “This project is going to have a $100 billion company built around it.”

HashiCorp is one of the companies that I can think of that has gone against that. Google is printing money, and they're not printing it by way of Kubernetes directly. Do the folks in your industry, or your section of the industry, come around to the idea that it's a bit of an apocryphal tale and that you should avoid doing that? It feels like there are some Greek tragedies or Aesop's Fables in there.

Anybody using any system or technology should understand where the people who are sustaining it get paid. For me, it doesn't matter if it's open source or proprietary software. You need to sustain the supply chain of your products, the things that you're using, and participate in that ecosystem. All along, a lot of us were saying, “Docker, you're doing all this stuff for free and we don't know how you're going to monetize.” As soon as they stepped into monetization, it created a cultural backlash. Instead of having collaboration around Swarm, which they wanted to monetize, people said, “No, I'm going to keep taking this free stuff and keep moving on.”

It's a huge challenge. There's a lot more reluctance broadly around open core as a model because there's a little bit of a bait and switch: we're going to give away stuff to you and then figure out how to limit you once you start needing it. For RackN, we are closed core, open ecosystem. That's more transparent. When you start putting things into production and leaning into how they work, then you need to understand how you're going to get support for that.

It could be that there's a great community and you don't need to worry about it. At the same time, if you're depending on something to work, you better be able to lean on it. That's what we found out when we were open core. We would get calls from brand name companies or the top companies in the world asking us to support our open source software, not having an interest in paying for it because they didn't want to get the approval to pay.

They were getting mad at us when it didn't work on server gear that we didn't have or couldn't test or anything like that. They'd submit a patch and it didn't work, and it didn't protect our other customers either. We have to be very careful when we look at how those things work. Open source collaboration and communities work well when there are well-defined APIs and ways that you can contribute, then have it tested and be part of that process. When things start hitting production infrastructure, it becomes much harder.

Was it natural to decide where that boundary lay when you flipped? One of the things that we struggle with is that some of the code that we write is like, “This is going to be ours. We're not going to give this away. We're trying to build a sustainable business,” and that's fine. If you have a problem with that, then fine. Other pieces are much more on the fence. We've spent a lot of time thinking about this, and it's frustrating: you get these natural features or bits of code where it's obvious where they sit, and then you get the ones in the gray area, which are frustrating because you're never quite sure whether you've made the right call on them.

I had a lot of angst before we switched. There were a couple of things that have proven out. One is when we looked at where the commits were coming from for what we had, nobody was committing to the core pieces.

You mean outside of the team.

At the end of the day, we didn't want them to. Because of the risk of somebody putting in something that didn't work, we were still responsible for it. Our customers and community were asking to see the things that we had closed because not being able to see that code was hurting them: “You did these things but I can't see how it works.” That reinforced how we looked at that switch. It came back to: where did we want collaboration? Where could people have useful input? Where did they need transparency in the operations?

Transparency isn't just being able to commit. It could be seeing the commit history and seeing how we fixed a problem. Flipping things over from that perspective worked pretty well. It comes back to transparency and creating triggers where people know when they need to pay. It's hard on the other side because we get people who are like, “I don't want to pay for that.” They filter themselves out quickly, or they are like, “I love it but I can't write a check for what you're doing.”

At least it's nice to have that conversation up front. It's frustrating to have that conversation after somebody has been using your system for a year, and then you have to have the conversation that says, “I'm sorry. We're not going to keep helping you scale your system past 1,000 endpoints.” I'm very sad about it. I wish the open source ecosystems were structured in a way that made it very easy to convert to production and acknowledge payment, but it isn't that way. The conversations after somebody has been in production with you for free and then has to pay for it are unpleasant.

In terms of the future, do you see a paradigm shift coming down the pipe in the same way that VMware and Docker were? One of the things that I find fascinating about this area is that it constantly reinvents its own technical foundations. Do you wonder if it's ever going to end, or will we ever get to a point where there is a generalist solution that works for 98% of folks?

I have become pretty reluctant to think that you're going to get that out of an open source community, especially at an integrative layer. At a tools layer, you might. Languages are totally different but from an infrastructure perspective, the amount of work it takes to build an integrative solution is very high. It's not something that communities do particularly well for the reasons we've discussed. Docker was giving away very expensive tooling and platforms for free. It wasn't a sustainable thing because people didn't pay for it out of the goodness of their hearts.

We're seeing that with Red Hat switching to CentOS Stream and saying, “If you want stable, that's a paid thing. We're not going to keep giving you stable free OS.” Alma and Rocky are much more transparent about saying, “There's a support model behind these operating systems that we expect people to pay into.”

The idea that there's going to be big free open source stuff, we've wised up to that. The idea was that OpenStack was going to be downloaded and tried, and everybody would have a cloud on their desktop; that was the vision. I was very excited about that. There are a lot more details and pieces that all have to work, fit together, be validated and checked for that to happen. There's a flip side to this comment that's maybe a darker side. I'm a big fan of software. What RackN does is software. It's not a service.

Software allows people to maintain a degree of control and autonomy in what they do. The industry is very busy doing as-a-service pieces. A lot of the open source stuff is being run behind the scenes for us in cloud and SaaS providers. It makes it easier. It's very operable and that's great, but it's a black box. It's the opposite of open source; you're losing visibility and control of your infrastructure and your autonomy in these cases. It's slowly leaking away in the name of ease of use.

As a software provider, it is easy to disseminate. I'm excited that people are starting to say, “I need software and I do need to make sure it's sustained and available.” At the same time, I'm worried that people are saying, “I'm tired of the cost and the effort that it takes to do that work. I'm going to keep handing it over to a service provider.” That's handing over a lot of control.

These things are restless. They're not like lots of other parts of the industry that have settled on a model. It's happened to us where people are very accepting of the vendor model because it's a well-trodden path, especially around infrastructure. Because so much is potentially at stake, it makes the whole area a bit of a minefield.

We are very busy talking about hybrid, multi-cloud and edge. There are a ton of things coming: choice, variation and complexity. We've done a good job going through some important details here. Choice, variation and complexity are very hard to accommodate in a codebase. We're reaching a point where we have to acknowledge that those are hard, and acknowledging that makes it possible for us to collaborate on things. We need to step back and figure out how we do a better job collaborating there. When we talk to customers about this, they have a lot of great teams using good tools on great infrastructure. It's like, “My teams, tools and platforms are great. Everything is great. Why is it so hard?” They don't have ways to coordinate and collaborate between them. It always comes back to the systems and the people.

I'm glad that you came on to talk about it. I hope we haven't ended too much on a dark note. Thanks so much for your time. I look forward to seeing where things go.

Thank you. I appreciate the time.

About
Rob Hirschfeld

Rob Hirschfeld is the Founder and CEO of RackN, and has been involved in OpenStack since the earliest days, with a focus on ops and building the infrastructure that powers cloud and storage. He’s also co-Chair of the Kubernetes Cluster Ops SIG and a four-term OpenStack board member.

The RackN team has deep knowledge of Kubernetes (deploying it on clouds and metal), OpenStack (created the Crowbar project), and cloud native architecture (migrated Digital Rebar to be micro-services). Rob has deep ops knowledge of both platforms AND experience with cloud native migrations. He’s also a regular speaker at OpenStack Summits about topics including SDN, interop and running Kubernetes.

