Developer Tooling, and Why It Matters for the Internet Computer
by ICPipeline · The Internet Computer Review · July 2022

Working on the Internet Computer has certainly increased our appreciation for things like Byzantine fault tolerance, orthogonal persistence, chain-key cryptography, threshold signing and the like. Without them there’d be no Internet Computer, and we’d all be busy with other things.

But those elements add up to something more fundamental: they enable developers to concentrate on what they're building rather than the infrastructure it will run on. It's the simple concept that doesn't always seem to cut through: the Internet Computer is the decentralized cloud. That has certainly been pointed out before, and it may seem altogether obvious. But for many people, especially relative newcomers to the space, that's the bell-ringer. And it may be less obvious to them than we tend to realize.

To be sure, there are learning curves. New concepts and vocabulary, fundamentally different architecture, etc. Likewise, there are things that modern development teams are accustomed to, some of which are not here yet. But the message is that it’s all worth it, with interest. On the Internet Computer, today, we can build real applications: big data stacks, social platforms, transactional commerce solutions, you name it. We can build anything, more or less, and the numbers work, because we can do it with much less overhead than the Web 2.0 way.

So it will be crucial to have tools that enable developers to carry their creativity into this new realm seamlessly, with best practices and workflows generally intact. We are early adopters by nature, and we have built teams and the environments that help them succeed and thrive. So this thought process, of synthesizing ways to bridge gaps, is thoroughly baked into our collective mindset. Indeed, we talk about our tools as the bridges, buckets and shovels that Web 2.0 folk will use to mine (rather, validate) Web3 gold.

What You See Is What You Get

For starters, a cloud needs a console, and it needs to do more than provide a few shortcuts and take the edge off the command line. The console is the central place where we find all the tools, available in context and at a click. AWS's mainstream adoption was massively enabled by its friendly, easy-to-use UI. Tools relating specifically to interconnectivity, hybridization and transition from our on-prem data centers were placed where we couldn't miss them. When it came to making the journey as painless as possible, they did not miss a trick. They likewise authored, curated and delivered a colossal corpus of technical documentation, ranging from in-depth whitepapers to cheat-sheet cookbooks. In terms of IC priorities going forward, that is a topic unto itself, and we hope to contribute there as well.

Another historical analog is Red Hat's original Linux desktop, which was crucial in bringing Linux into the enterprise space. Most of us know how it played out from there, in what had been such a Microsoft Server 20XX stronghold.

A non-linear, menu-driven UI is like a big window into unfamiliar territory. Information presented in context helps us to draw mental maps through steep learning curves. It also provides users, new and existing, with a place to congregate around solutions, methods and implementations that they may not have even been aware of. Communication on this level requires a place to go for answers and guidance, one that works in the ways developers are used to.

Getting From Here to There

Jack Dorsey is obviously a smart guy, who’s really thinking about the way forward. But his Web5 concept sounds pretty much like the Internet Computer. Which is already, um, extant. We aren’t the only ones to have made this observation.

But there is one particular aspect of the Web5 tag that rings very true for us: the idea that 2 + 3 = 5. We've long thought that, in order for Web3 to really pop on a mainstream level, it must plug into the world around it. By employing Web 2.0 tools, platforms and solutions as bridges into the Web3 space, effectively putting Web 2.0 to work for Web3, we'll be building something greater than the sum of its parts. To the extent that Web5 connotes that, we can only agree. We think the bridges-not-walls mindset, whatever it's called or by whom, should drive our collective thinking. Because there's every reason to expect, based on a long view of things, that the path to a singularity will lead through integration. We think this is not just possible, or even preferable; it's necessary.

Yes, along the way we’ll see more hybrids, plug-ins, connectors and transitional states than anyone can predict. They won’t all be pretty either. But history and experience have shown us that that’s OK. Quite recently cloud computing was new, and one didn’t need to be Nostradamus to see the potential there. We almost tend to forget that actually getting there, in real life, was never going to be easy.

Building Bridges, Not Jumping Off Cliffs

Perhaps you’ve been in the position of considering how good life would be, if only we could start with a clean whiteboard. But that’s generally not how life works. And when the stack is already big, diverse and sprawling — where real business is at risk, and even blips are high-cost events — it’s a very different story. And every shop has its own (oh-so deeply embedded) reasons why. We’ve been in our share of those tangles, and each situation is oddly unique, given the sameness of the underlying challenges (it’s the data, and the data, and …).

As mentioned, the commercial cloud providers have played things smartly, bending over backwards to provide us with the tools to sort things out. And there’s learned wisdom there, which should be in the mix as we navigate Web3’s maturation and mainstreaming curve. Deep integration into heterogeneous platform solutions? Yes. One-off, temporary bridges, to be burned once we’re across? Yes. We think we should grab hold of Web 2.0 resources when they put us closer to the goal, and that will come in all shapes and sizes.

It's Not OK for Amazon (or Anyone Else) to Effectively Be the Internet

To be clear, we're not suggesting that we should scrap the entire decentralization ethos. We're definitely not saying that. The last several years have been a continuous object lesson in the dark side of centralized, leveraged big data. While decentralization won't magically make it all better (indeed, on some levels it further complicates things), on balance we think it's a better way forward. Outcomes that manifest our collective instincts and impulses, however imperfect, will be better than handing over the keys to Gordon Gekko. We're just thinking about how we get there from here.

Web3 represents change of the good variety. Not unlike the first-generation cloud, it presents ways to massively reduce the sheer footprint involved in building and deploying digital product. That goes triple if we’re deploying at scale. This is the level that comes after virtualization and containerization. It’s a huge thing on a numbers level alone; and it’s another case where we’re not certain that it’s registering and resonating to the extent that it should.

We see our fellows here in this community building really excellent things. The IC ecosystem is full of good, smart people willing to share and extend a hand. And there’s so much wide-open problem space that it’s relatively easy to keep out of each other’s way. But the sheer scale of that collective backlog is also why we should all try to drive, maximize and multiply each other’s efforts. We want to help harness this energy and bring it to bear, serving as a distribution and delivery channel for everything from code linters to testing frameworks to identity stores to HA/DR solutions. We want ICPipeline to deliver tools created across the whole community, while also contributing orchestration logic between them.

So What Is ICPipeline’s Purpose?

We focus on fundamentals, the things that help teams work better together, and we concentrate on a few specific areas.

We want to make it easy for, well, everyone to spin up IC-replica environments. We mean very easy, so that with just a few clicks, your IC project is cloned, built and deployed — to a dedicated, fully networked, persistent and highly available platform. And we maximize your levels of access and control, with sudo access down to the underlying Linux VM.
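
As a rough illustration, the clone-build-deploy sequence that a Replicator automates behind those clicks looks much like the standard dfx workflow sketched below. The repository URL and directory name are placeholders, and this is a hand-rolled approximation for illustration, not ICPipeline's actual implementation.

```typescript
// replicate.ts -- illustrative only: the standard clone/build/deploy steps
// a Replicator automates. Assumes git, npm and dfx are installed locally.
import { execSync } from "child_process";

const repoUrl = "https://github.com/your-org/your-ic-project.git"; // placeholder
const workDir = "your-ic-project";                                  // placeholder

function run(cmd: string, cwd?: string): void {
  console.log(`$ ${cmd}`);
  execSync(cmd, { cwd, stdio: "inherit" });
}

run(`git clone ${repoUrl}`);                     // clone the project
run("npm install", workDir);                     // install frontend dependencies
run("dfx start --background --clean", workDir);  // launch a local IC replica
run("dfx deploy", workDir);                      // build and deploy all canisters
```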

Another big target is on-demand archiving and snapshots of any canister’s state data. For disaster recovery; for moment-in-time rollbacks; for the ability to spawn prod data into dev/QA environments. We think this will be really valuable, and that the community will like it.

It's about environments, and having the ability to spin them up on demand. This is easier said than done, and we've found that to be true in all sorts of situations, from newly built teams to teams that are established and well resourced. There are QA environments, so to speak, and the QA team does test things there. But when quirks escape into prod, the explanation is usually some variation of what's wrong with the QA environment, or the person whose fault it is doesn't work here anymore. Seriously, setups like that do little to make things better, and much to make them worse.

So we like the idea of tapping into the IC’s native virtues, and making it so our users can spin up clones, of any branch of any repo of any IC project, with moment-in-time data control. It really appeals to us, we think other teams will agree, and we’ll have it very soon.

A CI/CD Pipeline Requires Sandboxes for the Team to Play In

Here we're talking about environments as they relate to the Internet Computer. Currently there is no Internet Computer testnet, so the mainnet IC is the only option other than deploying locally on your laptop or desktop, and any unintended issues, security-related or otherwise, are exposed to the world. ICPipeline has created its own autonomous system, essentially a hub and spokes. The hub is the ICPipeline Manager console, or ICPM. ICPM runs on-chain as a canister d'app, accessed in a browser via its React frontend. From ICPM we manage environments that we call Replicators. Each Replicator is a fully functional Internet Computer development replica, with access to the Linux command line, the Webpack development server frontend, and the IC proxy direct to the local DFX replica. A Replicator can be equipped with a replica Internet Identity backend and/or its own ledger canister; both are ready to go, you just need to check a box.
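
To make the hub-and-spoke shape a bit more concrete, here is a minimal sketch of how a browser frontend might talk to the ICPM canister through agent-js. The Candid interface (list_replicators, create_replicator) and the canister id are hypothetical placeholders for illustration, not ICPM's actual API.

```typescript
// icpm-client.ts -- illustrative sketch; the interface and canister id are hypothetical.
import { Actor, HttpAgent } from "@dfinity/agent";
import type { IDL } from "@dfinity/candid";

// Hypothetical Candid interface for the ICPM hub canister.
const icpmIdlFactory: IDL.InterfaceFactory = ({ IDL }) =>
  IDL.Service({
    list_replicators: IDL.Func([], [IDL.Vec(IDL.Text)], ["query"]),
    create_replicator: IDL.Func([IDL.Text], [IDL.Text], []),
  });

const agent = new HttpAgent({ host: "https://ic0.app" });

const icpm = Actor.createActor(icpmIdlFactory, {
  agent,
  canisterId: "REPLACE_WITH_ICPM_CANISTER_ID", // placeholder
});

async function main() {
  // List existing Replicator environments, then spin up a new one for a branch.
  const replicators = (await icpm.list_replicators()) as string[];
  console.log("Existing Replicators:", replicators);

  const newId = (await icpm.create_replicator("feature/my-branch")) as string;
  console.log("Created Replicator:", newId);
}

main();
```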

When the dev team has a tested build that’s ready for deployment, the process from that point should be consistent. We need to perform regression tests; log the who/what/when for all deployments; and generally minimize opportunities for human error. As teams grow (along with the project footprint), we need to be able to separate responsibilities, shifting deployments away from developers entirely toward deployment managers and documentation managers.
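
As one small illustration of the who/what/when bookkeeping described above, a deployment audit entry might carry fields like the ones sketched here; the shape is purely illustrative, not ICPipeline's actual schema.

```typescript
// deployment-log.ts -- illustrative shape for a deployment audit entry.
interface DeploymentRecord {
  who: string;               // principal or username of the deployer
  what: string;              // canister name and git commit hash
  when: Date;                // timestamp of the deployment
  environment: string;       // e.g. "replicator-qa" or "mainnet"
  regressionPassed: boolean; // result of the pre-deployment test run
}

const deployments: DeploymentRecord[] = [];

function logDeployment(record: DeploymentRecord): void {
  deployments.push(record);
  console.log(
    `[deploy] ${record.when.toISOString()} ${record.who} -> ${record.what} (${record.environment})`
  );
}

logDeployment({
  who: "deployment-manager@example.com",
  what: "backend_canister @ 3f2a9c1",
  when: new Date(),
  environment: "mainnet",
  regressionPassed: true,
});
```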

We take each of these inflection points as a call to action, and we want ICPipeline to answer all of them. How do I implement blue-green deployments in my pipeline? How do I integrate regression testing into my pipeline? How do I integrate data migrations into my pipeline? For starters, you first need to have a pipeline. Enter ICPipeline 😉
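
To give those questions a concrete shape, here is a hypothetical pipeline definition; the stage names, fields and file paths are invented for illustration and are not a real ICPipeline configuration format.

```typescript
// pipeline.config.ts -- hypothetical pipeline definition, for illustration only.
interface PipelineStage {
  name: string;
  environment: string;          // which Replicator (or mainnet) the stage targets
  runRegressionTests: boolean;  // gate promotion on a passing test run
  dataMigration?: string;       // optional migration script to apply first
}

const pipeline: PipelineStage[] = [
  { name: "dev", environment: "replicator-dev", runRegressionTests: false },
  {
    name: "qa",
    environment: "replicator-qa",
    runRegressionTests: true,
    dataMigration: "migrations/2022-07-add-profile-fields.ts",
  },
  // blue-green cutover: promote "green" to live traffic only after it passes
  { name: "green", environment: "mainnet-green", runRegressionTests: true },
];

export default pipeline;
```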

It's About the Data, Data, Data

As referenced above, we are also pursuing robust canister-state snapshot capability for any IC canister d'app. ICPipeline makes it easy to deploy any code branch to a Replicator. Very soon you'll be able to include a data snapshot taken from your production canisters, down to the moment in time, and spin it up with a click. It's a really empowering capability, and even well-resourced shops can struggle to get there. It's about catastrophes that don't happen, hours not wasted, bugs that don't surface in production, morale that doesn't suffer, and talent that doesn't leave out of frustration. They're all non-events, and non-events don't come with savings coupons attached. But when we see the bigger picture and communicate the results, it's a beautiful thing when it comes together.

So you can set up a Replicator and get your project deployed. Then take a snapshot, make changes, do some testing, revert to that snapshot, make more changes, test, repeat until happy, then deploy to production. That’s a game changer.
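
Here's a rough sketch of that snapshot-and-revert loop from a test script's point of view. The take_snapshot and restore_snapshot methods are hypothetical stand-ins for the capability described above, and the canister id is a placeholder.

```typescript
// snapshot-loop.ts -- illustrative only; take_snapshot/restore_snapshot are
// hypothetical methods standing in for the snapshot capability described above.
import { Actor, HttpAgent } from "@dfinity/agent";
import type { IDL } from "@dfinity/candid";

const snapshotIdl: IDL.InterfaceFactory = ({ IDL }) =>
  IDL.Service({
    take_snapshot: IDL.Func([], [IDL.Text], []),            // returns a snapshot id
    restore_snapshot: IDL.Func([IDL.Text], [IDL.Bool], []), // restore by id
  });

const agent = new HttpAgent({ host: "http://127.0.0.1:8000" }); // local Replicator
const canister = Actor.createActor(snapshotIdl, {
  agent,
  canisterId: "REPLACE_WITH_CANISTER_ID", // placeholder
});

async function testCycle() {
  await agent.fetchRootKey(); // needed when targeting a local (non-mainnet) replica

  const snapshotId = (await canister.take_snapshot()) as string;

  // ... make changes, run tests against the Replicator ...

  // Revert to the known-good state and try again until happy.
  await canister.restore_snapshot(snapshotId);
}

testCycle();
```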

Conclusion

The team at ICPipeline has been building software and running enterprise application platforms and teams for decades. We're very excited by the in-built advantages that the Internet Computer presents for developers, teams and organizations. At the enterprise level, we think about the fortunes being invested in DevOps, SecOps, CloudOps and the like, most of which could instead go toward UX and product development. We get excited about the impact the IC can have. Decentralized cloud is a useful term for describing it, but its true potential goes beyond that.

We'll continue collaborating on these articles as we go. We have a lot of pieces running around in our heads and in progress, some of which require more explaining than others. We believe that ICPipeline — like the Internet Computer itself — represents a no-brainer value proposition. So we'll keep going as fast as we can, while doing our best to let you know why.