The ability to update software on devices is a valuable tool for protecting critical systems from evolving threats. However, this capability is not without risk: an alarming number of vulnerabilities have been introduced through malicious software patches or flaws in the update process. New software update frameworks have been developed to mitigate this risk, but they bring new levels of complexity, and they may not work on segmented network architectures or be suitable for embedded devices. Brian Romansky focuses on TUF (The Update Framework), a software update approach that addresses many common vulnerabilities, and considers how it can be applied in a critical infrastructure environment. It is compared against SUIT (Software Update for IoT) and UpKit, two alternative frameworks intended for use on embedded systems. Attack trees are used to compare these models and visually explain the strengths and challenges that may be encountered when they are applied in a network that follows the Purdue or ISA-99/IEC 62443 architecture. The role of metadata such as an SBOM and vendor test results is also considered. These concepts are merged to recast software updates into the context of an integrated supply-chain and configuration management system.
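To ground the framework discussion, here is a minimal, hypothetical Python sketch of the kind of check an update client performs under a TUF-style trust model: metadata must carry signatures from a threshold of trusted role keys, and the downloaded payload must match the hash pinned in that metadata. This is a conceptual illustration only, not the python-tuf API; the metadata layout, key handling, and function names are assumptions.

```python
# Conceptual sketch (not the python-tuf API): accept an update only if its
# metadata is signed by a threshold of trusted role keys and the payload
# matches the pinned hash, in the spirit of TUF's role/threshold model.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_targets_metadata(metadata: dict, trusted_keys: dict, threshold: int) -> dict:
    """Return the signed payload if at least `threshold` trusted keys signed it.

    `trusted_keys` maps key id -> raw Ed25519 public key bytes (an assumption;
    real TUF metadata uses canonical JSON and richer key records).
    """
    signed_bytes = json.dumps(metadata["signed"], sort_keys=True).encode()
    valid = 0
    for sig in metadata["signatures"]:
        pub = trusted_keys.get(sig["keyid"])
        if pub is None:
            continue  # ignore signatures from unknown keys
        try:
            Ed25519PublicKey.from_public_bytes(pub).verify(
                bytes.fromhex(sig["sig"]), signed_bytes
            )
            valid += 1
        except InvalidSignature:
            continue
    if valid < threshold:
        raise RuntimeError(f"only {valid} valid signatures, need {threshold}")
    return metadata["signed"]


def verify_payload(payload: bytes, signed: dict, target_name: str) -> None:
    """Check the downloaded image against the hash pinned in the signed metadata."""
    expected = signed["targets"][target_name]["sha256"]
    if hashlib.sha256(payload).hexdigest() != expected:
        raise RuntimeError("payload hash does not match signed metadata")
```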
Automated Picking Systems
-
Today, B2B SaaS products perform impressively in isolation, delivering functionality, efficiency, and productivity gains. But they don’t play well with others. Vendors know they need to offer a wide set of native integrations, but that’s getting harder to achieve. As the B2B tech stack swells (the average business uses 371 SaaS apps), the number of integrations vendors need to build is skyrocketing. In the coming decade, this problem will only grow as B2B software operates across thousands of highly specialized applications. These systems won’t just coexist; they’ll need to interoperate in real time, across dynamic, evolving workflows.

Current SaaS architectures struggle with integration complexity. Fragmented stacks, ad hoc APIs, and manual workarounds introduce bottlenecks at scale. To fully unlock the value of SaaS, vendors need infrastructure that abstracts away the burden of bespoke integration development. Legacy solutions fall short: embedded iPaaS enables point-to-point connectivity but lacks scalability and maintainability, while unified APIs offer abstraction but constrain customization and depth of integration due to rigid schemas. What’s needed is a universal, API-agnostic integration layer: one that enables composable, reusable logic across heterogeneous systems, at the scale of hundreds of apps.

At Integration App, we’re building exactly that. Our platform introduces a standardized integration framework that decouples integration logic from underlying APIs. Using AI, we generate adaptive, app- and tenant-specific implementations, allowing developers to build complex, multi-surface integrations with minimal overhead. This architecture dramatically reduces time-to-integration, supports scalable extensibility, and aligns with modern expectations for one-click deployments and dynamic orchestration. SaaS value is shifting from standalone features to ecosystem interoperability. The next generation of platforms will be defined by how well they connect.
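As a rough illustration of what decoupling integration logic from underlying APIs can mean in practice, here is a hypothetical Python sketch: the sync logic is written once against an abstract connector, while per-app adapters translate it into each vendor's payload shape. The class names, fields, and adapters are invented for this example and do not represent Integration App's actual platform or any vendor's real API.

```python
# Hypothetical sketch of "decouple integration logic from the underlying API":
# business logic targets an abstract operation; per-app adapters translate it.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Contact:
    name: str
    email: str


class CRMConnector(ABC):
    """App-agnostic surface that the integration logic depends on."""

    @abstractmethod
    def upsert_contact(self, contact: Contact) -> str:
        """Create or update a contact, returning the remote record id."""


class AcmeCRMConnector(CRMConnector):
    """Adapter for one hypothetical vendor's payload shape."""

    def upsert_contact(self, contact: Contact) -> str:
        payload = {"properties": {"email": contact.email, "firstname": contact.name}}
        # ... call the vendor API with `payload` here ...
        return f"acme:{payload['properties']['email']}"


class GlobexCRMConnector(CRMConnector):
    """Adapter for a second hypothetical vendor."""

    def upsert_contact(self, contact: Contact) -> str:
        payload = {"Email": contact.email, "LastName": contact.name}
        # ... call the vendor API with `payload` here ...
        return f"globex:{payload['Email']}"


def sync_signup(connector: CRMConnector, name: str, email: str) -> str:
    """Integration logic written once, reused across every connected app."""
    return connector.upsert_contact(Contact(name=name, email=email))


if __name__ == "__main__":
    print(sync_signup(AcmeCRMConnector(), "Ada", "ada@example.com"))
    print(sync_signup(GlobexCRMConnector(), "Ada", "ada@example.com"))
```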
-
Sustainable productivity gains come from solutions that scale without adding complexity - and DHL Supply Chain shows what that looks like in practice. By adopting our picking solution Frontline, they’ve enabled a 100% hands-free, vision-guided logistics process that led to a 15% increase in productivity, error rates dropping to just 0.1%, and onboarding time cut by up to 70%. Vision picking integrates seamlessly into DHL’s day-to-day warehouse operations. With over 1,500 employees using it across 25+ US sites, it’s proving that when digital tools are intuitive, accessible, and scalable, they drive real business outcomes - faster fulfillment, lower costs, and a more satisfied workforce. This is a perfect example of how we’re shaping the digital workplace for frontline teams, enabling them to work smarter and more efficiently. I’m sharing a short video below that brings this transformation to life. It’s a great illustration of how technology, when designed with people in mind, can create a lasting competitive advantage. #DHL #TeamViewer #VisionPicking #Logistics TeamViewer
-
When most people hear automation, they think of expensive robots, complex integrations, and big enterprise budgets. It feels out of reach for smaller businesses. But it doesn’t have to be. Research from the MIT Sloan School of Management shows that SMEs can approach automation differently, and affordably, without losing sight of competitiveness.

One way is to start with peripheral processes. These aren’t the core operations that need high reliability, but the supporting activities that often drain time and attention. Think QR codes to monitor container unloading, sensors to track equipment health, or simple smartphone apps to give real-time visibility. Small steps like these can improve efficiency by 10–15% with minimal cost.

Another way is to use stand-alone solutions. These don’t require deep IT integration, which makes them easier to deploy and scale. AI chatbots, IoT sensors, or plug-and-play analytics tools can be rolled out gradually, growing with the business rather than demanding heavy upfront investment.

The lesson is simple: automation costs spiral when you chase customisation, tight integration, or unnecessary reliability. Costs come down when you design for compatibility, modularity, and just-enough functionality. For SMEs, the challenge isn’t whether automation is possible; it’s knowing where to start and being smart about how far to go. Start small, scale at your own pace, and you’ll find automation doesn’t have to break the bank. #Automation #DigitalTransformation #SMEs #Innovation
-
Focusing on AI’s hype might cost your company millions… (Here’s what you’re overlooking)

Every week, new AI tools grab attention, whether it’s copilot assistants or image generators. While helpful, these often overshadow the true economic driver for most companies: AI automation. AI automation uses LLM-powered solutions to handle tedious, knowledge-rich back-office tasks that drain resources. It may not be as eye-catching as image or video generation, but it’s where real enterprise value will be created in the near term.

Consider ChatGPT: at its core is a large language model (LLM) like GPT-3 or GPT-4, designed to be a helpful assistant. These same models can be fine-tuned to perform a variety of tasks, from translating text to routing emails, extracting data, and more. The key is their versatility. By leveraging custom LLMs for complex automations, you unlock workflows that weren’t feasible before. Tasks like looking up information, routing data, extracting insights, and answering basic questions can all be automated with LLMs, freeing up employees and generating ROI on your GenAI investment.

Starting with internal process automation is a smart way to build AI capabilities, resolve issues, and track ROI before external deployment. As infrastructure becomes easier to manage and costs decrease, the potential for AI automation continues to grow. For business leaders, the first step is identifying bottlenecks that are tedious for employees and prone to errors; then apply LLMs and AI solutions to streamline those operations. Remember, LLMs go beyond text: they can be used in voice, image recognition, and more. For example, Ushur is using LLMs to extract information from medical documents and feed it into backend systems efficiently, a task that was historically difficult for traditional AI systems. (Link in comments)

In closing, while flashy AI demos capture attention, real productivity gains come from automating tedious tasks. This is a straightforward way to see returns on your GenAI investment and justify it to your executive team.
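As a concrete example of the kind of back-office automation described above, here is a minimal sketch of LLM-based email routing. It assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment; the model name, queue categories, and prompt are placeholders, not a recommended production setup.

```python
# Minimal sketch of LLM-powered back-office automation: routing an inbound
# email to a queue. Model name, categories, and prompt are illustrative.
from openai import OpenAI

QUEUES = ["billing", "claims", "technical_support", "other"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def route_email(subject: str, body: str) -> str:
    """Ask the model to pick exactly one routing queue for the email."""
    prompt = (
        f"Classify this email into one of {QUEUES}. "
        "Reply with the category name only.\n\n"
        f"Subject: {subject}\n\n{body}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = resp.choices[0].message.content.strip().lower()
    return answer if answer in QUEUES else "other"  # fall back on unexpected output


if __name__ == "__main__":
    print(route_email("Invoice question", "I was charged twice last month."))
```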
-
🚀 Excited to share my latest project: a fully autonomous Smart Warehouse Management System built using the Agent Communication Protocol (ACP)! This system features four intelligent agents (InventoryBot, OrderProcessor, LogisticsBot, and WarehouseManager) working seamlessly together to manage stock, schedule deliveries, and handle reorders, all through standardized, real-time communication.

🌟 What is ACP? ACP is a framework that enables autonomous agents to communicate effectively using structured messages with defined performatives (e.g., ASK, REQUEST_ACTION, TELL, CONFIRM). It ensures clear, reliable interactions, making it ideal for complex systems like smart warehouses where coordination is key.

🌟 How It Works:
Scenario 1: Stock Alert & Reorder - The OrderProcessor checks stock levels with InventoryBot and triggers reorders to maintain minimum availability (e.g., reordering to fill low laptop stock).
Scenario 2: Delivery Scheduling - The WarehouseManager directs LogisticsBot to schedule deliveries of goods, with LogisticsBot confirming the schedule, including a tracking ID for transparency.
Scenario 3: Low Stock Management - InventoryBot alerts the WarehouseManager of low stock (e.g., 5 tablets), prompting a confirmation that 15 tablets are needed; the WarehouseManager then requests OrderProcessor to place an order for 15 tablets, with OrderProcessor confirming via a PO number.
The interactive frontend visualizes these interactions, complete with a Statistics dashboard (e.g., total messages: 6, active conversations: 3, registered agents: 4) to monitor performance, making it perfect for real-world adoption.

🏭 Impact on Logistics: This solution transforms the logistics industry by reducing manual oversight, optimizing stock levels, and streamlining delivery schedules. With real-time data and automated reordering, warehouses can operate 24/7, cut costs, and improve customer satisfaction, key drivers in today’s fast-paced supply chain. This showcases how AI and ACP can revolutionize warehouse management. Check out the demo video to see it in action!
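To make the message format tangible, here is a hypothetical Python sketch of ACP-style structured messages using the performatives named above (ASK, TELL, REQUEST_ACTION, CONFIRM). The field names and agent behaviour are illustrative assumptions based on the post's description, not the project's actual implementation.

```python
# Hypothetical sketch of ACP-style structured messages between warehouse
# agents, following Scenario 1 (stock check and reorder). Fields are illustrative.
from dataclasses import dataclass, field
from enum import Enum
from uuid import uuid4


class Performative(Enum):
    ASK = "ASK"
    TELL = "TELL"
    REQUEST_ACTION = "REQUEST_ACTION"
    CONFIRM = "CONFIRM"


@dataclass
class ACPMessage:
    performative: Performative
    sender: str
    receiver: str
    content: dict
    conversation_id: str = field(default_factory=lambda: uuid4().hex)


# OrderProcessor asks InventoryBot for the current laptop stock level.
ask = ACPMessage(Performative.ASK, "OrderProcessor", "InventoryBot",
                 {"query": "stock_level", "sku": "laptop"})

# InventoryBot replies within the same conversation.
tell = ACPMessage(Performative.TELL, "InventoryBot", "OrderProcessor",
                  {"sku": "laptop", "quantity": 3, "minimum": 10},
                  conversation_id=ask.conversation_id)

# If stock is below the minimum, request a reorder for the shortfall.
if tell.content["quantity"] < tell.content["minimum"]:
    shortfall = tell.content["minimum"] - tell.content["quantity"]
    reorder = ACPMessage(
        Performative.REQUEST_ACTION, "OrderProcessor", "WarehouseManager",
        {"action": "reorder", "sku": "laptop", "quantity": shortfall},
        conversation_id=ask.conversation_id)
    confirm = ACPMessage(
        Performative.CONFIRM, "WarehouseManager", "OrderProcessor",
        {"po_number": "PO-0042", "sku": "laptop", "quantity": shortfall},
        conversation_id=ask.conversation_id)
    print(reorder, confirm, sep="\n")
```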
-
Automation is more than just clicking a button.

While automation tools can simulate human actions, they don't possess human instincts to react to unexpected situations. Understanding the limitations of automation is crucial to avoid blaming the tool for our own scripting shortcomings.

📌 Encountering Unexpected Errors: Automation tools cannot intuitively interpret error messages or auto-resume test cases after a failure. Testers must investigate execution reports, refer to screenshots or logs, and provide precise instructions to handle unexpected errors effectively.

📌 Test Data Management: Automation testing relies heavily on test data. Ensuring the availability and accuracy of test data is vital for reliable testing. Testers must consider how the automation script interacts with the test data, whether it retrieves data from databases, files, or APIs. Additionally, generating test data dynamically can enhance test coverage and provide realistic scenarios.

📌 Dynamic Elements and Timing: Web applications often contain dynamic elements that change over time, such as advertisements or real-time data. Testers need techniques like dynamic locators or explicit waits to handle these elements effectively (a minimal wait example follows after this post). Timing issues, such as synchronization problems between application responses and script execution, can also impact test results and require careful consideration.

📌 Maintenance and Adaptability: Automation scripts need regular maintenance to stay up to date with application changes. As the application evolves, UI elements, workflows, or data structures might change, causing scripts to fail. Testers should establish a process for script maintenance and ensure scripts are adaptable enough to accommodate future changes.

📌 Test Coverage and Risk Assessment: Automation testing should not aim for 100% test coverage in all scenarios. Testers should perform risk assessments and prioritize critical functionalities or high-risk areas for automation. Balancing automation and manual testing is crucial for achieving comprehensive test coverage.

📌 Test Environment Replication: Replicating the test environment ensures that the automation scripts run accurately and produce reliable results. Testers should pay attention to factors such as hardware, software versions, configurations, and network conditions to create a robust and representative test environment.

📌 Continuous Integration and Continuous Testing: Integrating automation testing into a continuous integration and continuous delivery (CI/CD) pipeline can accelerate the software development lifecycle. Automation scripts can be triggered automatically after each code commit, providing faster feedback on the application's stability and quality.

Let's go beyond just clicking a button and embrace automation testing as a strategic tool for software quality and efficiency. #automationtesting #automation #testautomation #softwaredevelopment #softwaretesting #softwareengineering #testing
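As a small example of the explicit-wait technique mentioned under Dynamic Elements and Timing, here is a sketch using Selenium's Python bindings. The URL, selector, and timeout are placeholders; a real suite would wrap this in its own page-object or helper layer.

```python
# Sketch of an explicit wait with Selenium (Python bindings). The URL and
# CSS selector are placeholders for whatever dynamic element the test needs.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()  # assumes a local ChromeDriver is available
try:
    driver.get("https://example.com/products")
    # Poll for up to 10 seconds for the dynamically rendered price element,
    # instead of relying on a brittle fixed sleep.
    price = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "[data-testid='price']"))
    )
    assert price.text != ""
finally:
    driver.quit()
```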
-
𝗘𝘃𝗲𝗿 𝗳𝗲𝗹𝘁 𝗹𝗶𝗸𝗲 𝘆𝗼𝘂𝗿 𝗿𝗲𝗴𝘂𝗹𝗮𝘁𝗼𝗿𝘆 𝗽𝗮𝘁𝗵𝘄𝗮𝘆 𝗶𝘀 𝗮 𝗯𝗶𝘁 𝘁𝗲𝗻𝘂𝗼𝘂𝘀? If so, you're not alone. When it comes to bringing a medical device to market, the journey can feel anything but straightforward. Here are some actionable steps to make your regulatory path less tenuous and more secure:

𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱 𝘁𝗵𝗲 𝗥𝗲𝗴𝘂𝗹𝗮𝘁𝗼𝗿𝘆 𝗟𝗮𝗻𝗱𝘀𝗰𝗮𝗽𝗲
→ Different regions have different requirements.
→ For instance, the FDA in the U.S. and the MHRA in the UK have unique criteria.
→ Knowing the specifics can save you from surprises later on.

𝗗𝗼𝗰𝘂𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻 𝗶𝘀 𝗞𝗲𝘆
→ Maintain thorough and organised documentation.
→ This includes everything from design history files to risk management reports.
→ Trust me, when an auditor or inspector comes knocking, you'll be thankful for your meticulous records.

𝗞𝗻𝗼𝘄 𝗬𝗼𝘂𝗿 𝗖𝗹𝗮𝘀𝘀𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻
→ Misclassifying your device can lead to major setbacks.
→ Ensure you understand whether your device falls under Class I, II, or III.
→ This will dictate the level of regulatory scrutiny your product will face.

𝗘𝗻𝗴𝗮𝗴𝗲 𝗘𝗮𝗿𝗹𝘆 𝘄𝗶𝘁𝗵 𝗥𝗲𝗴𝘂𝗹𝗮𝘁𝗼𝗿𝘆 𝗕𝗼𝗱𝗶𝗲𝘀
→ Don't wait until the last minute to interact with regulatory authorities.
→ Early engagement can provide critical insights and help you avoid common pitfalls.
→ For example, pre-submission meetings with the FDA can be invaluable.

𝗦𝘁𝗮𝘆 𝗨𝗽𝗱𝗮𝘁𝗲𝗱 𝗼𝗻 𝗥𝗲𝗴𝘂𝗹𝗮𝘁𝗶𝗼𝗻𝘀
→ Regulatory standards are constantly evolving.
→ Subscribe to industry newsletters and join relevant forums.
→ Being proactive can often mean the difference between compliance and costly delays.

𝗜𝗻𝘃𝗲𝘀𝘁 𝗶𝗻 𝗤𝘂𝗮𝗹𝗶𝘁𝘆 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁 𝗦𝘆𝘀𝘁𝗲𝗺𝘀 (𝗤𝗠𝗦)
→ A robust QMS is not just a regulatory requirement; it’s a business asset.
→ Implementing standards like ISO 13485 can streamline your processes and improve product quality.

𝗛𝗶𝗿𝗲 𝗼𝗿 𝗖𝗼𝗻𝘀𝘂𝗹𝘁 𝘄𝗶𝘁𝗵 𝗘𝘅𝗽𝗲𝗿𝘁𝘀
→ Don’t hesitate to bring in external expertise.
→ Regulatory consultants can provide specialised knowledge and help navigate complex requirements.
→ This can be particularly useful for SMEs with limited in-house resources.

Conduct Thorough Testing and Validation
→ Ensure that all necessary tests are conducted and well documented.
→ This includes biocompatibility, electrical safety, and performance testing.
→ Proper validation can prevent last-minute hitches during the approval process.

Plan for Post-Market Surveillance
→ Regulatory compliance doesn’t end at market entry.

Remember, the regulatory journey might seem tenuous, but with the right approach, you can navigate it successfully.
-
In a recent roundtable with fellow CXOs, a recurring theme emerged: the staggering costs associated with artificial intelligence (AI) implementation. While AI promises transformative benefits, many organizations find themselves grappling with unexpectedly high Total Cost of Ownership (TCO). Businesses are seeking innovative ways to optimize AI spending without compromising performance. Two pain points stood out in our discussion: model customization and production-readiness costs. AI isn't just about implementation; it's about sustainable integration. The real challenge lies in making AI cost-effective throughout its lifecycle. The real value of AI is not in the model, but in the data and infrastructure that support it. As AI becomes increasingly essential for competitive advantage, how can businesses optimize costs to make it more accessible?

Strategies for AI Cost Optimization

1. Efficient Customization
- Leverage low-code/no-code platforms to reduce development time
- Utilize pre-trained models and transfer learning to cut down on customization needs

2. Streamlined Production Deployment
- Implement MLOps practices for faster time-to-market on AI projects
- Adopt containerization and orchestration tools to improve resource utilization

3. Cloud Cost Management
- Use spot instances and auto-scaling to reduce cloud costs for non-critical workloads
- Leverage reserved instances for predictable, long-term usage; the savings over on-demand pricing can be substantial

4. Hardware Optimization
- Implement edge computing to reduce data transfer costs
- Invest in specialized AI chips that can offer better performance per watt than general-purpose processors

5. Software Efficiency
- Route queries to right-sized LLMs rather than a single large model, an approach many teams are now trying
- Apply model compression techniques such as pruning and quantization, which can reduce model size without significant accuracy loss (see the sketch after this post)
- Adopt efficient training techniques such as mixed-precision training to speed up the process
- By streamlining repetitive tasks, organizations can reallocate resources to more strategic initiatives

6. Data Optimization
- Focus on data quality, since it can reduce training iterations
- Utilize synthetic data to supplement expensive real-world data, potentially cutting data acquisition costs

In conclusion, embracing AI-driven strategies for cost optimization is not just a trend; it is a necessity for organizations looking to thrive in today's competitive landscape. By leveraging AI, businesses can not only optimize their costs but also enhance their operational efficiency, paving the way for sustainable growth. What other AI cost optimization strategies have you found effective? Share your insights below! #MachineLearning #DataScience #CostEfficiency #Business #Technology #Innovation #ganitinc #AIOptimization #CostEfficiency #EnterpriseAI #TechInnovation #AITCO
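As one concrete instance of the model-compression point above, here is a minimal sketch of post-training dynamic quantization with PyTorch, which stores Linear-layer weights as int8. The toy model and the size-measurement helper are illustrative assumptions; actual savings and accuracy impact depend on the model and workload.

```python
# Sketch: post-training dynamic quantization of Linear layers with PyTorch.
# The toy model and the size helper are illustrative only.
import io

import torch
import torch.nn as nn


def size_mb(model: nn.Module) -> float:
    """Serialize the state dict in memory and report its size in MB."""
    buffer = io.BytesIO()
    torch.save(model.state_dict(), buffer)
    return buffer.getbuffer().nbytes / 1e6


model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Weights of nn.Linear modules are stored as int8; activations are
# quantized dynamically at inference time.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(f"fp32 model:      {size_mb(model):.2f} MB")
print(f"quantized model: {size_mb(quantized):.2f} MB")
```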
-
🚀 Building Observable Infrastructure: Why Automation + Instrumentation = Production Excellence and Customer Success

After building our platform's infrastructure and application automation pipeline, I wanted to share why combining Infrastructure as Code with deep observability isn't optional but foundational, as shown in the screenshots of our implementation on Google Cloud.

The Challenge: Manual infrastructure provisioning and application onboarding creates consistency gaps, slow deployments, and zero visibility into what's actually happening in production. When something breaks at 3 AM, you're debugging blind.

The Solution: Modular Terraform + OpenTelemetry from Day One. Our approach centered on three principles:

1️⃣ Modular, well-architected Terraform modules as reusable building blocks. Each service (Argo CD, Rollouts, Sonar, Tempo) gets its own module. This means:
1. Consistent deployment patterns across environments
2. Version-controlled infrastructure state
3. Self-service onboarding for dev teams

2️⃣ OpenTelemetry instrumentation of every application during onboarding, as a minimum specification (see the instrumentation sketch after this post). This captures:
1. Distributed traces across our apps, services, and nodes (graph view)
2. Golden signals (latency, traffic, errors, saturation)
3. Custom business metrics that matter

3️⃣ Single Pane of Glass Observability. Our Grafana dashboards aggregate everything: service health, trace data, build pipelines, resource utilization. When an alert fires, we have context immediately, not 50 tabs of different tools.

Real Impact:
→ Application onboarding dropped from days to hours
→ Mean time to resolution decreased by 60%+ (actual trace data beats guessing)
→ Infrastructure drift eliminated through automated state management
→ Dev teams can self-service without waiting on platform engineering

Key Learnings:
→ Modular Terraform requires discipline up front but pays dividends at scale.
→ OpenTelemetry context propagation must be consistent across your stack.
→ Dashboards should tell a story; organise them by user journey.
→ Automation without observability is just faster failure. You need both.

The Technical Stack:
→ Terraform for infrastructure provisioning
→ Argo CD for GitOps-based deployments
→ OpenTelemetry for distributed tracing and metrics
→ Tempo for trace storage
→ Grafana for unified visualisation

The screenshot shows our command center:
→ Active services
→ Full trace visibility
→ Automated deployments with comprehensive health monitoring

Bottom line: Modern platform engineering isn't about choosing between automation OR observability. It's about building systems where both are inherent to the architecture. When infrastructure is code and telemetry is built in, you get reliability, velocity, and visibility in one package.

Curious how others are approaching this? What does your observability strategy look like in automated environments? #DevOps #PlatformEngineering #Observability #InfrastructureAsCode #OpenTelemetry #SRE #CloudNative
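For readers who want to see what "instrument every application during onboarding" can look like at the code level, here is a minimal OpenTelemetry (Python SDK) sketch. The service name, span names, and console exporter are placeholders; a real setup would export via OTLP to a backend such as Tempo.

```python
# Minimal OpenTelemetry tracing sketch (Python SDK). Names and the console
# exporter are placeholders for a real OTLP/Tempo pipeline.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Configure a tracer provider once at application startup.
provider = TracerProvider(resource=Resource.create({"service.name": "onboarding-demo"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)


def handle_order(order_id: str) -> None:
    # One span per request; child spans capture downstream calls, which is
    # what feeds latency/error golden signals per step.
    with tracer.start_as_current_span("handle_order") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("reserve_inventory"):
            pass  # call the inventory service here
        with tracer.start_as_current_span("schedule_delivery"):
            pass  # call the logistics service here


handle_order("ORD-1001")
```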