Yesterday, I had an insightful conversation with a seasoned software product leader, and one phrase stuck with me: code is liability.

At first, it sounds counterintuitive. We often think of code as an asset: something that brings value to a company. But the reality is that every line of code written comes with inherent costs and risks. Here's why:

1. Maintenance Burden – Code isn't a one-time investment. Every feature added increases the surface area for bugs, security vulnerabilities, and technical debt. The more code you have, the more effort it takes to maintain.
2. Complexity & Fragility – The more code you write, the harder it becomes to make changes without breaking something else. What starts as a simple solution can quickly turn into a tangled mess requiring extensive rework.
3. Scalability Risks – As software evolves, poorly designed or unnecessary code can bottleneck performance. What works today may slow you down tomorrow, requiring costly refactoring or complete rewrites.
4. Opportunity Cost – Time spent managing and debugging bloated codebases is time not spent on innovation. The best software companies minimize unnecessary code and focus on delivering value efficiently.
5. Security Vulnerabilities – Every additional line of code is a potential attack vector. The larger the codebase, the more opportunities for exploits.

This conversation reinforced something I've seen firsthand: the best engineers and product leaders aren't the ones who write the most code; they're the ones who write the least necessary code. In a world where we celebrate shipping new features, we often overlook the cost of what we've built. Sometimes, the best decision isn't to add more; it's to simplify, refactor, or even delete.
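To make the "least necessary code" idea concrete, here is a minimal, hypothetical sketch (the function names and the config format are illustrative assumptions, not from the conversation): a hand-rolled parser the team must now own forever, versus a one-line call into a battle-tested standard-library module.

```python
import json

# Hypothetical "before": every branch of this hand-rolled parser is surface
# area the team owns, with bugs, edge cases, and security review included.
def parse_config_by_hand(text: str) -> dict:
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()  # quoting? escaping? duplicate keys?
    return config

# Hypothetical "after": delete the custom code and lean on a standard-library
# module that someone else maintains, tests, and patches.
def parse_config(text: str) -> dict:
    return json.loads(text)  # assumes the config is a JSON object
```

The second version is not just shorter; it shifts maintenance, testing, and patching onto a widely scrutinized dependency instead of the team's own backlog.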
The Importance of Software Liability
Summary
The importance of software liability lies in ensuring that developers, companies, and organizations are held accountable for the safety, security, and functionality of their digital products. As we increasingly rely on software and AI systems in critical areas like healthcare, transportation, and national security, defining and upholding liability becomes essential for fostering trust, mitigating risks, and protecting users from harm.
- Emphasize accountability frameworks: Clearly define who is responsible for issues that arise from software failures or security vulnerabilities to establish trust with customers and partners.
- Prioritize secure design: Incorporate strong cybersecurity measures and testing during the software development process to prevent risks and manage liability.
- Plan for ongoing updates: Ensure that software maintenance includes timely updates and patches for both functionality and safety to reduce potential liabilities over time.
-
Code can automate decisions, but not responsibility. This distinction will determine which AI companies survive the next decade.

As AI agents become more autonomous, I've noticed an interesting pattern: the more advanced the system, the more crucial the accountability framework becomes.

Contract law wasn't designed for robots. It emerged from centuries of human commercial relationships, centered on a simple principle: when things go wrong, someone specific must be accountable. Even the most sophisticated agentic AI exists within this reality. While algorithms make decisions, liability still flows to identifiable entities: companies, executives, developers, operators. This isn't a limitation; it's a feature.

I've watched enterprise AI deployments stall not because of technical issues, but because no one could answer the simple question: "Who's responsible when this fails?" The companies winning major contracts aren't those promising to remove humans entirely. They're the ones who've thoughtfully designed where and how humans remain accountable within their AI systems.

Some founders view liability as friction to be engineered away. The successful ones recognize it as the foundation of customer trust. Consider: financial institutions won't adopt AI that can't trace decisions to accountable parties. Healthcare providers require clear liability chains. Government contracts demand specific responsible entities. Where technology meets commerce, accountability isn't negotiable.

This creates a counterintuitive advantage for founders building AI companies: clarity about responsibility accelerates adoption. Well-defined liability frameworks reduce perceived risk. Transparent accountability protocols build institutional trust. Responsibility frameworks aren't limitations on AI; they're the foundations that make widespread business adoption possible.

The capital-labor equation will continue shifting with AI advancement. But indemnity, liability, and accountability will remain firmly anchored to humans and the organizations they create. Business is fundamentally about creating accountability structures that enable valuable risk-taking. The most successful AI founders aren't those trying to eliminate human responsibility; they're the ones designing optimal interfaces between algorithmic capability and human accountability. #startups #founders #growth #ai
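What such an interface between algorithmic capability and human accountability might look like in code, as a minimal sketch (the field names, the model identifier, and the example owner below are all illustrative assumptions, not any standard): every automated decision is stamped with an identifiable accountable party and a traceable rationale before it takes effect.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    action: str             # what the agent decided to do
    rationale: str          # traceable reasoning behind the decision
    model_version: str      # which system produced it
    accountable_owner: str  # the named human or entity responsible
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def record_decision(action: str, rationale: str) -> Decision:
    # Liability flows to an identifiable entity, so every automated decision
    # is stamped with one before it takes effect.
    return Decision(
        action=action,
        rationale=rationale,
        model_version="agent-v1",                  # hypothetical identifier
        accountable_owner="ops-lead@example.com",  # hypothetical owner
    )
```

In a real deployment this record would feed an audit log that customers or regulators can query; the point of the sketch is only that the accountable entity is captured at decision time, not reconstructed after a failure.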
-
In the rush to adopt AI agents, we may be forgetting an old lesson: you can outsource the work, but you can't outsource the ultimate responsibility for outcomes. That's where cybersecurity risk might explode. Let me explain:

I remember the early days of business process re-engineering and some of the first waves of business process automation and globalization that ultimately led to massive changes in the way work is done, and when and where it was done. Offshore manufacturing and redistribution of work to low-cost places were all byproducts of these early efforts. With these changes came a requirement to rethink the whole notion of "who is responsible?" when things don't turn out as expected.

Throughout my career, one of the most predictably thorny contract issues when dealing with business partners (as a supplier or as a provider) was the "limits on liability" clause. This is the intersection where risk and responsibility collide, and it includes cybersecurity risk. The answer was often a compromise between the parties, where the business value of the transaction determined the upper and lower limits of liability.

Many are rushing to plug AI into their business, but most haven't asked the critical question: who (or what) is responsible when something goes wrong?
- Who created it (including all of the sub-creators)?
- Who quality checks it (if that is done at all)?
- How was the AI trained, and on what data?
- What data and metadata are being used, collected, stored, and shared?
- What is the potential for harm? (This may be the most important question!)

Here are some examples where AI is being used (early days, but often with great success) and where there is a potential for great harm:
- Interpretation of medical scans
- Autonomous vehicles
- Robotic assembly and repair
- Customer service

Software license agreements often contain boilerplate clauses that contractually eliminate any liability for harm caused by defects in the software. This is unique to the software industry and has almost no equivalent in any other field. With agentic AI taking on more and more of the work formerly done by humans, I believe there is a need for basic product liability coverage (as a matter of law) for software products in general and agentic AI in particular. It might lead to answering some of the key questions above in advance of the disasters that are likely to come.
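One way to operationalize those questions, as a hypothetical sketch (the class and field names are mine, not an industry standard): require a structured provenance record for every AI component before it ships, so the answers exist in advance of an incident rather than being reconstructed afterward.

```python
from dataclasses import dataclass

# Hypothetical sketch: the post's due-diligence questions captured as a
# required, structured record for each AI component in production.
@dataclass
class AIComponentProvenance:
    creator: str             # who created it, including all sub-creators
    quality_reviewer: str    # who quality-checks it, if anyone does
    training_summary: str    # how the AI was trained and on what data
    data_handled: list[str]  # data/metadata used, collected, stored, shared
    harm_potential: str      # the potential for harm: the key question

# Illustrative entry for one of the post's examples (all values invented):
scan_reader = AIComponentProvenance(
    creator="VendorX, building on models from sub-vendor Y",
    quality_reviewer="radiologist sign-off on every flagged scan",
    training_summary="de-identified scans, 2015-2023, documented consent",
    data_handled=["scan images", "inference logs"],
    harm_potential="missed or false diagnosis; human review is mandatory",
)
```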
-
On 8 December 2024, the EU's new Product Liability Directive (PLD) came into force, with its provisions set to apply fully to products placed on the market after 9 December 2026. The revised PLD has significant implications for AI. The Directive explicitly includes AI systems under its scope, holding manufacturers liable for defects in AI applications, operating systems, or machine-learning-enabled systems. It also extends liability to cover defects arising from updates, upgrades, or learning-based modifications made after release, addressing the evolving nature of AI technologies.

Links:
- European Commission overview: https://lnkd.in/gn7yC6Cb
- Text: https://lnkd.in/gh495jww

* * *

Who is in scope?
All economic operators involved in the design, manufacture, production, import, distribution, or substantial modification of products, including software and components, in the course of a commercial activity. This includes manufacturers, authorised representatives, importers, fulfilment service providers, and distributors. The Directive explicitly includes:
- Products: tangible goods, digital manufacturing files, software (e.g., AI systems), raw materials, and related services integrated into products.
- Substantial modifiers: those who make significant modifications to products after their initial placement on the market.

When does it apply to American organizations?
Any non-EU manufacturer or economic operator whose products or components are imported or made available in the EU market falls under the Directive. This includes:
- American companies exporting to the EU.
- Entities providing software, digital manufacturing files, or integrated services for products sold or distributed in the EU.

* * *

Key points on the Product Liability Directive (EU) 2024/2853
Liability is strict (no-fault) and applies to all products, including software and AI systems integrated into or controlling tangible goods.

Specific inclusions:
- Software is treated as a product if supplied in the course of commercial activity, regardless of how it is delivered (e.g., SaaS, cloud, or installed on devices).
- AI providers are treated as manufacturers under the Directive.
- Digital manufacturing files and integrated services (e.g., AI services enabling product functionality) are also in scope.

Exemptions:
- Free and open-source software is exempt unless distributed in the course of commercial activity.
- Personal-use property and purely informational content are excluded.

Manufacturer's responsibilities:
- Liability extends to cybersecurity vulnerabilities.
- Manufacturers must maintain software updates for safety, though not necessarily functional updates.
-
Software liability: often referred to as the "third rail of cybersecurity policy."

Yesterday, in testimony on the Chinese Communist Party cyber threat to American national security, Jen Easterly emphasized the need for software liability: the ability to hold software and product vendors liable for not taking reasonable precautions to secure their products. Currently, the operating model is for software and technology vendors to externalize those costs onto consumers, customers, and citizens. Jen emphasized how decades of prioritizing incentives such as speed to market over security have jeopardized our national security, as we sit atop a fragile and incredibly vulnerable digital society.

There have been renewed calls for software liability, including an emphasis in the latest National Cybersecurity Strategy. But what would software liability actually look like in practice?

James Dempsey, one of the most competent people on the topic, recently published an excellent paper titled "Standards for Software Liability: Focus on the Product for Liability, Focus on the Process for Safe Harbor." It was published as part of The Lawfare Institute's "Secure-by-Design" project. In the paper, Jim discusses:
- The problem to be solved
- Warranty vs. Negligence vs. Products Liability vs. Certification: The Question Is the Same
- Existing Software Standards Focus on the Process, Not the Product
- The Floor: Lessons from Other Fields
- The Software Liability Floor: Compiling Definitive Software Do's and Don'ts

If you're interested in the concept of software liability, and want a peek at what evolving legal and regulatory approaches may look like, I strongly recommend checking out this paper. Whether you're producing or consuming software, this is going to be increasingly relevant as the calls to address cyber policy's third rail continue to grow. Decades of software and technology vendors externalizing these costs have become a national security threat, now that modern society, from consumer goods to critical infrastructure and national security systems, is driven by software. #cybersecurity #software #ciso