How LLMs Impact Software Development

Explore top LinkedIn content from expert professionals.

Summary

Large language models (LLMs) are reshaping software development by shifting its traditional paradigms. From generating and verifying code to acting as interactive agents, LLMs are changing how developers build and interact with software, making processes more efficient and enabling new possibilities such as partial autonomy and agent-based systems.

  • Focus on automation: Use LLMs as tools to automate repetitive tasks such as code reviews, debugging, documentation, and workflow management, allowing developers to focus on more complex problems.
  • Adapt to new paradigms: Shift from traditional coding to utilizing natural language prompts as programming instructions, enabling faster iterations and broader accessibility to software development.
  • Design for collaboration: Create systems that integrate human-in-the-loop workflows and agent-readable interfaces to enhance interaction between users, AI agents, and software.
Summarized by AI based on LinkedIn member posts
  • View profile for Abhi Khadilkar

    Founder, Spearhead | Applied AI Strategy & Systems for COOs, CIOs & CCOs

    12,692 followers

Most companies are in between Software 1.0 and 2.0. Thanks to AI, Software 3.0 has arrived. (Download the 72-page slide deck below.)

    Andrej Karpathy’s recent talk at Y Combinator's AI Startup School introduces a concept that every tech executive should sit with: Software 3.0. Where Software 1.0 was about handcrafting logic, and Software 2.0 involved neural networks as black-box classifiers, Software 3.0 treats prompts as programs and LLMs as general-purpose computing substrates. This is the next substrate shift in software, the equivalent of mainframes → PCs → cloud → AI-native systems.

    First, let us review the Software 3.0 paradigm's four areas:

    1. LLMs are the new operating systems, not just tools. They are:
    + Utilities (serving computation in the flow of work),
    + Fabs (mass-producing "digital artifacts" via generative interfaces), and
    + OSes (abstracting complexity, orchestrating context, managing memory and interfaces).
    The right way to view this is not "plug in an LLM." It is: what would a system look like if an LLM were your system's OS?

    2. We’re entering the age of partial autonomy. Karpathy makes a compelling analogy to the Iron Man suit:
    + Augmentation: LLMs extend human capability (autocomplete, summarization, brainstorming).
    + Autonomy: LLMs act independently in constrained environments (agent loops, retrieval systems, workflow automation).
    This leads to the concept of autonomy sliders: tuning systems from fully manual to semi-automated to agentic, depending on risk tolerance, verification requirements, and task criticality.

    3. The generator-verifier loop is the new core of development. Instead of "write → run → debug," think: prompt → generate → verify → refine. Shorter loops, faster iterations, and critical human-in-the-loop checkpoints. Reliability comes from verification, not perfection, a major shift for teams used to deterministic systems. (A minimal sketch of this loop follows the post.)

    4. Architect for agents, not just users. Your software doesn’t just serve end users anymore; it must now serve agents. These digital workers interact with your APIs, documentation, and UIs in fundamentally different ways. Karpathy calls for a new class of developer experience: llms.txt instead of robots.txt, agent-readable docs, schema-first interfaces, and fine-tuned orchestration layers.

    Some implications for AI implementations:

    A. Because of Software 3.0, enterprise architecture will evolve: traditional deterministic systems alongside generative, agentic infrastructure.
    B. AI governance must span both.
    C. Investments in data pipelines, prompt systems, and verification workflows will be as important as microservices and DevOps were in the previous era.
    D. Your talent model must evolve: think AI Engineers, not just Prompt Engineers, blending deep system knowledge with model-first programming.
    E. You’ll need a new playbook for build vs. integrate: when to wrap traditional software with LLMs vs. re-architect for Software 3.0 natively?

    What are your thoughts about Software 3.0?
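    To make the generator-verifier loop concrete, here is a minimal Python sketch under stated assumptions: `generate_patch` stands in for an LLM call and `run_tests` for a deterministic verifier. Neither is a real API; only the shape of the loop is the point.

    ```python
    # Minimal generator-verifier loop: prompt -> generate -> verify -> refine.
    # `generate_patch` and `run_tests` are hypothetical stand-ins for an LLM
    # call and a test runner; only the loop shape matters here.

    def generate_patch(prompt: str, feedback: str | None = None) -> str:
        """Call an LLM to draft code for `prompt`, optionally with verifier feedback."""
        raise NotImplementedError  # wire up your model provider here

    def run_tests(code: str) -> tuple[bool, str]:
        """Verifier: run the candidate against tests, return (passed, report)."""
        raise NotImplementedError  # e.g. write to a sandbox and invoke the test suite

    def generate_and_verify(prompt: str, max_rounds: int = 5) -> str:
        feedback = None
        for _ in range(max_rounds):
            candidate = generate_patch(prompt, feedback)
            passed, report = run_tests(candidate)  # verification, not trust
            if passed:
                return candidate                   # human review remains the final gate
            feedback = report                      # refine: feed failures back to the generator
        raise RuntimeError("no candidate passed verification; escalate to a human")
    ```

    The human-in-the-loop checkpoint maps naturally onto the autonomy slider: at low autonomy a person reviews every candidate, while at higher settings review happens only when verification fails or the change is risky.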

  • View profile for Pradeep Sanyal

    Enterprise AI Leader | Former CIO & CTO | Chief AI Officer (Advisory) | Data & AI Strategy → Implementation | 0→1 Product Launch

    19,192 followers

LLMs are the new operating systems. And we’re all programming them in English.

    Software is undergoing a once-in-a-generation rewrite. Not just in what we build, but how. Andrej Karpathy’s recent talk at AI Startup School lays it out clearly: we’ve gone from Software 1.0 (explicit logic) → Software 2.0 (neural nets with learned parameters) → Software 3.0 (LLMs, programmable via English prompts). This isn’t just a clever metaphor. It’s a full-blown platform shift. “LLMs are utilities. LLMs are fabs. LLMs are operating systems.” And if that’s true, then today’s apps aren’t just software, they're the new UX layer for partial autonomy.

    Here’s what’s changing and what it means:

    🔹 𝐏𝐫𝐨𝐦𝐩𝐭 = 𝐏𝐫𝐨𝐠𝐫𝐚𝐦. You don’t code anymore. You instruct. The syntax is natural language, the compiler is stochastic, and the runtime is probabilistic. Anyone who can think clearly can now build software.

    🔹 𝐂𝐮𝐫𝐬𝐨𝐫 𝐚𝐧𝐝 𝐏𝐞𝐫𝐩𝐥𝐞𝐱𝐢𝐭𝐲 = 𝐄𝐚𝐫𝐥𝐲 𝐋𝐋𝐌-𝐧𝐚𝐭𝐢𝐯𝐞 𝐚𝐩𝐩𝐬. These apps don’t just call LLMs, they’re orchestrators. They manage context, layer GUIs for human verification, and offer autonomy sliders that let you decide how much control to cede.

    🔹 𝐄𝐯𝐞𝐫𝐲 𝐚𝐩𝐩 𝐰𝐢𝐥𝐥 𝐡𝐚𝐯𝐞 𝐚𝐧 𝐚𝐮𝐭𝐨𝐧𝐨𝐦𝐲 𝐬𝐥𝐢𝐝𝐞𝐫. Like Iron Man suits, not Iron Man robots. We’re building augmentations, not agents. Yet. Keep the AI on a leash. Make the human-in-the-loop cycle fast.

    🔹 𝐖𝐞’𝐫𝐞 𝐛𝐚𝐜𝐤 𝐢𝐧 𝐭𝐡𝐞 1960𝐬 𝐨𝐟 𝐜𝐨𝐦𝐩𝐮𝐭𝐢𝐧𝐠. Time-sharing. Centralized compute. Batched queries. LLMs aren’t personal yet. We interact with them like dumb terminals plugged into a smart mainframe. That’ll change, but not tomorrow.

    🔹 𝐃𝐨𝐜𝐬 𝐚𝐫𝐞 𝐟𝐨𝐫 𝐡𝐮𝐦𝐚𝐧𝐬. 𝐈𝐭’𝐬 𝐭𝐢𝐦𝐞 𝐭𝐨 𝐰𝐫𝐢𝐭𝐞 𝐟𝐨𝐫 𝐚𝐠𝐞𝐧𝐭𝐬. APIs were for programs. GUIs were for users. LLMs are a third interface type. We need llms.txt, Markdown-first docs, and agent-readable formats. Tools like DeepWiki and Gitingest are leading indicators. (An illustrative llms.txt file follows the post.)

    🔹 𝐋𝐋𝐌𝐬 𝐡𝐚𝐯𝐞 𝐩𝐬𝐲𝐜𝐡𝐨𝐥𝐨𝐠𝐲. They're not machines. They simulate people. They’re savants with amnesia. Superhuman in some domains, clueless in others. We must learn to collaborate without over-trusting.

    Why this matters for you: If you’re building software, stop thinking in code. Start thinking in agent affordances, prompt interfaces, and generation-verification loops. If you're an enterprise leader, don’t just “adopt AI.” Redesign your architecture to accommodate software that thinks, apps that adapt, and users that co-pilot. And if you’re in product, remember: partial autonomy will eat the GUI. The new UX isn't just visual. It's conversational, stochastic, and deeply probabilistic.

    “The future is less about programming computers, more about negotiating with them.”

    Build for people spirits. Design for GUIs and agents. And always, always audit the diff.
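    For readers who have not seen one, here is an illustrative llms.txt in the spirit of the llmstxt.org proposal the post alludes to: a Markdown file served at a site's root that gives agents a concise, link-rich map of the documentation. The project name and URLs below are invented for illustration.

    ```markdown
    # ExampleWidgets

    > ExampleWidgets is a REST API for rendering dashboard widgets.
    > Agents should prefer the Markdown docs linked below over the HTML site.

    ## Docs

    - [Quickstart](https://example.com/docs/quickstart.md): auth and first request
    - [API reference](https://example.com/docs/api.md): endpoints, schemas, error codes

    ## Optional

    - [Changelog](https://example.com/docs/changelog.md)
    ```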

  • View profile for Patrick Salyer

    Partner at Mayfield (AI & Enterprise); Previous CEO at Gigya

    8,325 followers

Software just went through its second great refactor due to AI. Andrej Karpathy's (ex-OpenAI and Tesla, Stanford PhD) keynote, “Software Is Changing (Again)”, is a must-watch for founders building AI agents. Here are five takeaways:

    1. Natural language is the new IDE. Prompts are code; context windows are memory. Build for English-in, action-out.

    2. Partial autonomy beats full autonomy. Cursor thrives because it keeps humans in the loop, orchestrates many model calls, and surfaces a clean GUI for fast verification. Ship the Iron Man suit before the autonomous robot.

    3. Meet the model halfway. Model output is untrusted code. Ship an eval/CI loop, tests, and guardrails; your product’s value is in verifying, not generating. (A guardrail sketch follows the post.)

    4. LLMs resemble operating systems and utilities. Your moat will come from workflow ownership and data, not model access.

    5. This is the decade, not the year, of agents. Autonomy at Tesla teaches patience: perfect demos arrive long before dependable systems. Plan for a marathon, not a sprint.

    Bottom line: Software + AI is eating the stack. Founders who pair LLM superpowers with pragmatic UX, tight verification loops, and controlled autonomy will define the next enterprise wave.
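    A minimal sketch of takeaway 3, assuming the generated artifact is Python source: syntax-check it, then execute it in a subprocess with a hard timeout before it ever reaches CI. The gates and limits here are illustrative, not a complete sandbox.

    ```python
    import ast
    import subprocess
    import sys
    import tempfile

    def guard_generated_code(code: str, timeout_s: int = 10) -> bool:
        """Treat LLM output as untrusted: reject it before it reaches CI or prod."""
        # Gate 1: the output must at least be valid Python.
        try:
            ast.parse(code)
        except SyntaxError:
            return False

        # Gate 2: run it in an isolated subprocess with a hard timeout.
        # (A real setup would add a container/seccomp sandbox and resource limits.)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        try:
            result = subprocess.run(
                [sys.executable, path], capture_output=True, timeout=timeout_s
            )
        except subprocess.TimeoutExpired:
            return False
        return result.returncode == 0  # Gate 3 (not shown): project tests + human review
    ```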

  • View profile for Pan Wu

    Senior Data Science Manager at Meta

    49,867 followers

Large Language Models (LLMs) possess vast capabilities that extend far beyond conversational AI, and companies are actively exploring their potential. In a recent tech blog, engineers at Faire share how they’re leveraging LLMs to automate key aspects of code reviews, unlocking new ways to enhance developer productivity.

    At Faire, code reviews are an essential part of the development process. While some aspects require deep project context, many follow standard best practices that do not. These include enforcing clear titles and descriptions, ensuring sufficient test coverage, adhering to style guides, and detecting backward-incompatible changes. LLMs are particularly well-suited for handling these routine review tasks. With access to relevant pull request data—such as metadata, diffs, build logs, and test coverage reports—LLMs can efficiently flag potential issues, suggest improvements, and even automate fixes for simple problems.

    To facilitate this, the team leveraged an internally developed LLM orchestrator service called Fairey to streamline AI-powered code reviews. Fairey processes chat-based requests by breaking them down into structured steps, such as calling an LLM, retrieving necessary context, and executing functions. It integrates seamlessly with OpenAI’s Assistants API, allowing engineers to fine-tune assistant behavior and incorporate capabilities like Retrieval-Augmented Generation (RAG). This approach enhances accuracy, ensures context awareness, and makes AI-driven reviews genuinely useful to developers.

    By applying LLMs in code reviews, Faire demonstrates how AI can enhance developer workflows, boosting efficiency while maintaining high code quality. As companies continue exploring AI applications beyond chat, tools like Fairey provide a glimpse into the future of intelligent software development.

    #Machinelearning #Artificialintelligence #AI #LLM #codereview #Productivity #SnacksWeeklyonDataScience

    – – – Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
    -- Spotify: https://lnkd.in/gKgaMvbh
    -- Apple Podcast: https://lnkd.in/gj6aPBBY
    -- Youtube: https://lnkd.in/gcwPeBmR
    https://lnkd.in/deaMsxZy
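    Fairey itself is Faire-internal, but the general shape of an LLM review pass is straightforward to sketch. Everything below, including the `fetch_pull_request` and `call_llm` helpers and the prompt, is a hypothetical stand-in rather than Faire's actual implementation.

    ```python
    # Hypothetical sketch of an LLM code-review pass in the spirit of Faire's
    # write-up. `fetch_pull_request` and `call_llm` are stand-ins, not Faire's
    # actual service or API.

    REVIEW_PROMPT = """You are a code reviewer. Given a pull request, check only
    mechanical best practices: clear title/description, test coverage for changed
    lines, style-guide adherence, and backward-incompatible changes.
    Return a bulleted list of issues, or 'LGTM' if none."""

    def fetch_pull_request(pr_id: str) -> dict:
        """Gather the context the model needs: metadata, diff, build and coverage logs."""
        raise NotImplementedError

    def call_llm(system: str, user: str) -> str:
        """Stand-in for a chat-completion call to your model provider."""
        raise NotImplementedError

    def review(pr_id: str) -> str:
        pr = fetch_pull_request(pr_id)
        context = (
            f"Title: {pr['title']}\nDescription: {pr['description']}\n"
            f"Diff:\n{pr['diff']}\nCoverage report:\n{pr['coverage']}"
        )
        return call_llm(REVIEW_PROMPT, context)  # post the result as a PR comment
    ```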

  • View profile for Brij kishore Pandey

    AI Architect | Strategist | Generative AI | Agentic AI

    691,704 followers

Large Language Models (LLMs) are powerful, but how we 𝗮𝘂𝗴𝗺𝗲𝗻𝘁, 𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲, 𝗮𝗻𝗱 𝗼𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗲 them truly defines their impact. Here's a simple yet powerful breakdown of how AI systems are evolving:

    𝟭. 𝗟𝗟𝗠 (𝗕𝗮𝘀𝗶𝗰 𝗣𝗿𝗼𝗺𝗽𝘁 → 𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗲)
    ↳ This is where it all started. You give a prompt, and the model predicts the next tokens. It's useful, but limited. No memory. No tools. Just raw prediction.

    𝟮. 𝗥𝗔𝗚 (𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹-𝗔𝘂𝗴𝗺𝗲𝗻𝘁𝗲𝗱 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻)
    ↳ A significant leap forward. Instead of relying only on the LLM’s training, we 𝗿𝗲𝘁𝗿𝗶𝗲𝘃𝗲 𝗿𝗲𝗹𝗲𝘃𝗮𝗻𝘁 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 𝗳𝗿𝗼𝗺 𝗲𝘅𝘁𝗲𝗿𝗻𝗮𝗹 𝘀𝗼𝘂𝗿𝗰𝗲𝘀 (like vector databases). The model then crafts a much more relevant, grounded response. This is the backbone of many current AI search and chatbot applications.

    𝟯. 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗟𝗟𝗠𝘀 (𝗔𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀 𝗥𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 + 𝗧𝗼𝗼𝗹 𝗨𝘀𝗲)
    ↳ Now we’re entering a new era. Agent-based systems don’t just answer: they think, plan, retrieve, loop, and act. They:
    - Use 𝘁𝗼𝗼𝗹𝘀 (APIs, search, code)
    - Access 𝗺𝗲𝗺𝗼𝗿𝘆
    - Apply 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 𝗰𝗵𝗮𝗶𝗻𝘀
    - And most importantly, 𝗱𝗲𝗰𝗶𝗱𝗲 𝘄𝗵𝗮𝘁 𝘁𝗼 𝗱𝗼 𝗻𝗲𝘅𝘁 (see the sketch after this post)

    These architectures are foundational for building 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀 𝗔𝗜 𝗮𝘀𝘀𝗶𝘀𝘁𝗮𝗻𝘁𝘀, 𝗰𝗼𝗽𝗶𝗹𝗼𝘁𝘀, 𝗮𝗻𝗱 𝗱𝗲𝗰𝗶𝘀𝗶𝗼𝗻-𝗺𝗮𝗸𝗲𝗿𝘀. The future is not just about 𝘸𝘩𝘢𝘵 the model knows, but 𝘩𝘰𝘸 it operates. If you're building in this space, RAG and Agent architectures are where the real innovation is happening.
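    Here is a minimal agent-loop sketch of point 3, with `call_llm` as a hypothetical stand-in for a model call that returns either a tool invocation or a final answer; the tool registry is purely illustrative.

    ```python
    # Minimal agent loop: the model decides what to do next.
    # `call_llm` and the tools are hypothetical stand-ins.

    import json

    TOOLS = {
        "search": lambda q: f"(search results for {q!r})",  # e.g. a web-search API
        "calculate": lambda expr: str(eval(expr, {}, {})),  # demo only; never eval untrusted input
    }

    def call_llm(history: list[dict]) -> dict:
        """Stand-in: returns either {'tool': name, 'input': ...} or {'answer': ...}."""
        raise NotImplementedError

    def run_agent(task: str, max_steps: int = 8) -> str:
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            decision = call_llm(history)       # the model picks the next action
            if "answer" in decision:
                return decision["answer"]      # done: no more tool calls needed
            result = TOOLS[decision["tool"]](decision["input"])
            history.append({"role": "tool", "content": json.dumps(
                {"tool": decision["tool"], "result": result})})
        raise RuntimeError("step budget exhausted; hand back to a human")
    ```

    The step budget and the final hand-off to a human act as the leash: the model decides what to do next, but only within bounded, auditable iterations.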

  • View profile for Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    64,583 followers

    "This report covers findings from 19 semi-structured interviews with self-identified LLM power users, conducted between April and July of 2024. Power users are distinct from frontier AI developers: they are sophisticated or enthusiastic early adopters of LLM technology in their lines of work, but do not necessarily represent the pinnacle of what is possible with a dedicated focus on LLM development. Nevertheless, their embedding across a range of roles and industries makes them excellently placed to appreciate where deployment of LLMs create value, and what the strengths and limitations of them are for their various use cases.  ... Use cases We identified eight broad categories of use case, namely: - Information gathering and advanced search - Summarizing information - Explaining information and concepts - Writing - Chatbots and customer service agents - Coding - code generation, debugging/troubleshooting, cleaning and documentation - Idea generation - Categorization, sentiment analysis, and other analytics ... In terms of how interviewees now approached their work (vs. before the advent of LLMs), common themes were: - For coders, less reliance upon forums, searching, and asking questions of others when dealing with bugs - A shift from more traditional search processes to one that uses an LLM as a first port of call - Using an LLM to brainstorm ideas and consider different solutions to problems as a first step - Some workflows are affected by virtue of using proprietary tools within a company that reportedly involve LLMs (e.g., to aid customer service assistants, deal with customer queries) ... Most respondents had not developed or did not use fully automated LLM-based pipelines, with humans still ‘in the loop’. The greatest indications of automation were in customer service oriented roles, and interviewees in this sector expected large changes and possible job loss as a result of LLMs. Several interviewees felt that junior, gig, and freelance roles were most at risk from LLMs ... These interviews reveal that LLM power users primarily employed the technology for core tasks such as information gathering, writing, and coding assistance, with the most advanced applications coming from those with coding backgrounds. Although users reported significant productivity gains, they usually maintained human oversight due to concerns about accuracy and hallucinations. The findings suggest LLMs were primarily being used as sophisticated assistants rather than autonomous replacements, but many interviewees remained concerned that their jobs might be at risk or dramatically changed with improvements to or wider adoption of LLMs. By Jamie Elsey Willem Sleegers David Moss Rethink Priorities
