Fueled by the urgency of staying ahead of the competition, the hype cycles around new technologies often overshadow actual business applications. Buzzwords like “AI” and “LLMs” promise revolution, but how do you untangle their potential from the hype? And if the cloud is the future, how do you avoid getting trapped in vendor ecosystems? In this episode, mkdev co-founders Pablo and Kirill tackle the tech buzzwords, from AI and LLMs to DevOps and cloud lock-in, to help us separate hype from business value.
Pablo Iñigo Sánchez and Kirill Shirinkin are two of the three co-founders of mkdev. As they recall, mkdev started as a B2C (business-to-consumer) coding platform before transitioning to a B2B (business-to-business) model around three years ago.
As of today, mkdev offers consultancy services revolving around cloud technology, DevOps teams, architecting DevOps infrastructure, and conducting audits within that domain. In addition, following a noticeable shift in client demands in recent months, the company introduced a new division dedicated to AI-related services.
Managing hype cycles
Pablo notes that companies’ increasing interest in AI coincides with a lack of understanding of the desired outcomes or of what AI technologies actually offer. “Most people, they don’t know what they want, or they don’t know what they can get,” Pablo says, adding that, despite AI’s popularity and the industry’s advocacy, “not everyone understands what they’re talking about.” Yet, despite the uncertainty about AI’s potential, the hype behind it feeds the urgency to adopt it, especially for startups.
On technology adoption, Pablo explains that “the biggest companies are super conservative… If they go to one technology, they stick with this technology for years; little companies, they can change easily because it’s super dynamic for them.” Because they stick with a chosen technology for extended periods, larger companies tend to carefully assess benefits, complexities, and financial implications before making substantial shifts. Startups, by contrast, being more dynamic and having fewer entrenched processes, find it easier to pivot and adopt new technologies.
Pablo believes that one of the things driving technology adoption is fear. In his understanding, fear disseminates top-down within organizations: higher-ups, like CEOs and CTOs, feel compelled to adopt certain technologies due to peer pressure and the fear of losing a competitive edge by not adopting what others are. Conferences and word-of-mouth discussions create a cascading effect within companies, leading to a widespread belief in the necessity of these technologies, even without a clear business case.
Both Pablo and Kirill caution against the fear of missing out (FOMO) driving technology adoption, pointing out instances where individuals, influenced by blog posts or external discussions, hastily adopt new tools without considering their actual utility or prevalence within the industry. Their insights underscore the need for a balanced approach to technology adoption: cautious evaluation, questioning the necessity of new tools, and avoiding the herd mentality driven by hype cycles within the tech industry.
To demystify the hype surrounding new technologies, Kirill explains that mkdev guides its customers through an evaluation process before they adopt new technologies and provides them with resources so they can get a better grasp of what these technologies involve.
Still, Kirill stresses the need to be conservative yet open to innovation, and recommends questioning the necessity of a tool and understanding its relevance to specific needs rather than succumbing to the allure of innovation for its own sake: “My personal sense is always like I’m trying to be as conservative as possible and not fight innovation, but always to ask why.”
Integration and business value of LLMs
Integrating LLMs into a company is not just about having access to cutting-edge models but also about identifying practical business applications where these models create meaningful value. In this regard, Kirill illustrates the integration of LLMs into business by bringing up Amazon Bedrock, which, rather than aiming to replace models like ChatGPT, wraps around multiple models. Bedrock integrates closed- and open-source models from various sources, allowing users to select, fine-tune, and work with them. To Kirill, Bedrock’s approach implies that no single company, even one as prominent as OpenAI, can singularly drive progress in LLMs.
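To make the “wrapper” idea concrete, here is a minimal sketch of how each model family behind Bedrock expects its own request body even though all requests go through the same `InvokeModel` API (with boto3, roughly `boto3.client("bedrock-runtime").invoke_model(modelId=..., body=...)`). The model IDs and payload shapes below are simplified illustrations, not an exhaustive or authoritative reference.

```python
import json

def build_bedrock_body(model_id: str, prompt: str, max_tokens: int = 256) -> str:
    """Build an illustrative JSON request body for a Bedrock-hosted model.

    Each provider behind Bedrock defines its own payload schema; the shapes
    here are simplified sketches of two common families.
    """
    if model_id.startswith("anthropic."):
        # Older Anthropic models on Bedrock use a prompt/completion format.
        body = {
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": max_tokens,
        }
    elif model_id.startswith("amazon.titan"):
        # Amazon Titan text models take inputText plus a generation config.
        body = {
            "inputText": prompt,
            "textGenerationConfig": {"maxTokenCount": max_tokens},
        }
    else:
        raise ValueError(f"No payload template for {model_id}")
    return json.dumps(body)

print(build_bedrock_body("amazon.titan-text-express-v1", "Summarize our Q3 report"))
```

The point of the design, as Kirill describes it, is that the surrounding plumbing (authentication, hosting, selection, fine-tuning) stays uniform while the models themselves remain interchangeable.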
Pablo, for his part, believes that while some models are publicly available and customizable, certain specialized models, like GPT-4, hold a prominent position in the market and are sought out for their prominence and proven capabilities. Yet, Pablo stresses the importance of having a viable business case for using these large language models. Merely having access to the latest models isn’t enough. Instead of chasing what’s popular or technically advanced, the goal should be to align these models with real business needs: “Today, many companies want to have GPT because it’s there… It’s, ‘I need this huge model that is the best one today,’” Pablo says.
LLMs in the cloud vs on-prem
The option for companies to train and host LLMs themselves pales in comparison with what vendors offer. GPT-4’s strength, Kirill points out, lies in the extensive duration and resources invested in its training, up to six months to a year with vast amounts of data, a feat that is challenging to replicate.
The business models of cloud providers often revolve around providing and maintaining such infrastructure, making them a feasible option for organizations considering these resource-intensive models. Training and hosting LLMs in-house, by contrast, demands an investment that is rarely justified outside very particular cases, since utilization often requires dedicated resources.
DevOps and LLM/AI
Looking into the potential convergence of DevOps and AI, Pablo envisions a scenario where natural language processing and AI capabilities lead to a transformative shift in repetitive DevOps tasks. He suggests that a natural-language system, connected to various agents and proficient in understanding diverse cases thanks to extensive training, could substitute for routine DevOps work. While acknowledging that people would still be involved in coding and preparing these models, Pablo believes that the repetitive nature of many DevOps tasks, such as deploying the same infrastructure components across different companies, could be automated by autonomous components communicating with models and executing tasks.
Kirill, on the other hand, highlights specific AI use cases within the DevOps sphere, such as the observability enhancements offered by tools like Datadog and New Relic, which use AI to automate tasks like setting alarms or generating log-analysis queries from plain-English descriptions. However, Kirill isn’t sure that language models alone could drive the evolution of infrastructure. While models like GPT-4 might excel at certain coding tasks, they may lack awareness of newer developments, and the advancement of infrastructure has been more about building better abstractions on top of it than about AI.
Kubernetes’ essentiality and complexity
Pablo and Kirill offer differing perspectives on Kubernetes’ role in the cloud landscape. While Pablo sees Kubernetes as a necessary tool for scaling and managing complex container-based solutions, Kirill challenges its suitability in the cloud environment, emphasizing the potential complexity it adds without commensurate advantages in certain scenarios.
Pablo acknowledges the growing popularity of cloud-specific abstractions like Cloud Run (Google Cloud) and Container Apps (Azure), highlighting their simplicity and efficiency in deploying applications, particularly for smaller or front-end-focused projects. However, he emphasizes that when scaling or transitioning to more complex container-based solutions, Kubernetes becomes essential and isn’t likely to fade away from the cloud architecture landscape for the foreseeable future: “Everything depends… But as soon as you want to jump from this simplicity, you need Kubernetes.”
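The jump in complexity Pablo describes is easy to see side by side. On Cloud Run, deploying a container is roughly a single command (something like `gcloud run deploy my-service --image=IMAGE --region=REGION`), while even a minimal Kubernetes setup asks for a manifest plus a cluster to run it on. The sketch below is an illustrative minimal Deployment; the names and image are placeholders, not from the episode.

```
# Illustrative minimal Kubernetes Deployment manifest (placeholder names/image).
# Compare with a one-line Cloud Run deploy: the manifest buys you control over
# replicas, labels, and scheduling, at the cost of an extra management layer.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:latest
          ports:
            - containerPort: 8080
```

That extra surface area is precisely what Kubernetes offers once a project outgrows the simpler abstractions, and what Kirill sees as unnecessary overhead before that point.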
By contrast, Kirill suggests that Kubernetes might not be the optimal solution for cloud environments because of the extra complexity it introduces: it adds another management layer on top of the underlying compute resources (such as EC2 on AWS or Compute Engine on GCP). To him, “the power of Kubernetes” resides in its standardizing force, akin to how the Ruby on Rails application structure allows developers to navigate and understand other Ruby on Rails applications.
Navigating cloud vendor lock-in: between practical realities and open standards
Whether through proprietary technologies, deep integrations, or contractual penalties, vendor lock-in occurs when a business becomes excessively reliant on a single provider. Companies relying on a single cloud provider face significant challenges and costs if they try to switch to another.
Kirill’s view is that while vendor lock-in is a theoretical concern, it only manifests practically for some companies: some prioritize avoiding it, while for others it may never impact day-to-day operations.
Pablo, in turn, drawing on experience with migrations between cloud platforms, mentions cases where migrations from AWS or Azure to Google Cloud, driven primarily by financial incentives, take place only when offers of initially free services or financial support for migration costs are on the table. “As soon as you go to the cloud, it’s like an octopus,” says Pablo, pointing out how cloud providers continuously introduce new components and services, making it hard to detach once fully integrated into their ecosystem.
Kirill adds to this perspective, noting the emergence of a different form of lock-in when companies try to avoid vendor lock-in by adopting open standards and cloud-native approaches: this path can create dependencies on specific open-source technologies and their surrounding tools, courses, and consultants, constituting a form of lock-in within the open-standards sphere.
The bottom line
Dive deeper into mkdev via its YouTube channel and website. Additionally, mkdev offers various resources, such as the bi-weekly mkdev dispatch newsletter, a monthly podcast on DevOps and cloud topics, and webinars, including Pablo’s latest on how to interconnect Cloud Run with a database in Google Cloud.