Recent regulatory guidelines, viz., the RBI Master Directions of November 2023 and the SEBI Cybersecurity and Cyber Resilience Framework (CSCRF) of August 2024, place added emphasis on cyber resilience, business continuity and disaster recovery, incident response, and recovery from cyber incidents. Boards are increasingly attentive and are seeking deeper insights into their organizations' preparedness to respond to and recover from cyber incidents. As a member of the Boards of regulated entities, I saw this quarter's IT Strategy and Technology Committee meetings, as well as the Board meetings, delve deep, enquiring of the security and technology leadership, and sometimes directly of the MD/CEO, on:
1. Cyber incidents reported, their impact, and root-cause assessments. (Note: for these organizations, the reported events were mostly hits or false positives.)
2. Resilience scores, with Q-o-Q and Y-o-Y comparatives
3. Business continuity drills and their results
4. Disaster recovery exercises and their results
5. Health-check reports on the primary as well as the recovery sites, including cloud DR assessments
6. Cyber / technology risk assessments
7. Compliance and reporting (technology)
8. Ongoing governance and improvement around the Cyber Crisis Management Plan (or a similar plan, by whatever name it is defined)
9. Adequacy of technology and security resourcing and training
10. Data protection, with special emphasis on vendor / third-party access to critical data and resources, and the controls around the same
These were some of the top discussion points, but not the only ones. As Boards become more involved in, and responsible for, the governance of their organizations' cyber security, resilience, technology governance, and risk assurance, Board members will engage more regularly in discussions about cyber risks and will ask management about their capacity, capability, and readiness to respond to and recover effectively from cyber incidents.
And above all, the Board will want to ensure compliance with all the relevant regulatory provisions, including those on technology and #cybersecurity. To all Technology and Security leaders: the message is very clear. The regulators and the Boards want to see much more than a tick-mark exercise, especially if you are a regulated entity.
- Read through each clause in the directions and circulars from the regulators.
- Assess your current status thoroughly, including processes, operations, technology architecture, procedures, documentation, et al.
- Perform risk assessments, covering technology and operations, over each part of your business.
- Conduct data-flow analysis and ascertain your data protection strategy.
- Analyze your third-party / vendor connections at all business touchpoints.
Once you have analyzed your current state, compare it with the requirements in the regulatory directions. Then, step by step, put in the measures, updates, and upgrades. These are critical steps and require expert acumen; take help from external experts as required. #technologygovernance
Project Management Governance Models
Explore top LinkedIn content from expert professionals.
-
The real challenge is not scaling AI agents; it is scaling governance! As organizations shift from deploying AI as isolated tools to orchestrating multi-agent systems, governance must evolve with it. It's no longer just about minimizing harm; it's about enabling responsible autonomy at scale. This is where the Responsible Autonomy Framework (RAF) comes in.
🧭 On the left: Why we govern
- Accountability
- Transparency & Explainability
- Ethical Alignment
- Security & Resilience
⚙️ On the right: What we must govern as autonomy grows
- Autonomy Control
- Interaction & Coordination
- Adaptability & Evolution
- Interoperability
Each pairing demands new or uplifted capabilities, but here's the key: governance isn't one-size-fits-all. It depends on your organization's AI maturity level. Below are just a few examples of how agentic AI governance capabilities shift as maturity increases:
🔹 Level 1 – Ad hoc use of AI tools. Begins to lay the groundwork for responsible and ethical scale:
- Ownership structures
- Logging and audit trails
- Data management policies
🔹 Level 2 – Repeatable use of AI tools. AI begins supporting human workflows. Governance must now address, for example:
- Human-in-the-loop safeguards
- Explainability dashboards
- Responsibility mapping for augmented decisions
🔹 Level 3 – Management of AI agents. AI starts to take action. This demands governance mechanisms such as:
- Autonomy control matrices (who decides what)
- Interaction design policies for human-agent and agent-agent coordination
- Resilience testing for unpredictable scenarios
🔹 Level 4 – Governance of multi-agent systems. AI shapes business outcomes and adapts strategies. Governance needs to catch up:
- Ethical scenario simulation tools
- Behavioral monitoring agents
- Cross-system interoperability standards
🔹 Level 5 – Autonomous force (speculative). Here, governance isn't just about rules; it's about readiness:
- Can your controls evolve as fast as your AI?
- Are you governing at the ecosystem level?
- Are you building for explainability in unknown contexts?
👉 These are not complete lists; they're signals of the kinds of capability shifts that must occur across maturity levels. Every step up the maturity curve amplifies both opportunity and risk. The takeaway? AI governance isn't a compliance checkbox. It's an evolving capability in its own right, a leadership function that determines whether your AI empowers or entangles you. It is a challenge that spans mindset, culture, processes, structure, and methodology. I think the right foundation will be more critical than ever. And I think only Architects can define it. What do you think? Where on the AI governance journey are you?
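To make one Level 1 capability concrete, logging and audit trails for agent actions can start very small. The sketch below is a hypothetical Python illustration (the agent name, action, and in-memory store are my own assumptions, not part of the RAF): every action an agent takes is recorded with its inputs, outputs, and a timestamp, which is the raw material later maturity levels need for responsibility mapping and behavioral monitoring.

```python
import json
import functools
from datetime import datetime, timezone

AUDIT_TRAIL = []  # in production this would be an append-only, tamper-evident store


def audited(agent_name):
    """Decorator that records every agent action with inputs, outputs, and a UTC timestamp."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_TRAIL.append({
                "agent": agent_name,
                "action": fn.__name__,
                "inputs": json.dumps({"args": args, "kwargs": kwargs}, default=str),
                "output": json.dumps(result, default=str),
                "at": datetime.now(timezone.utc).isoformat(),
            })
            return result
        return inner
    return wrap


@audited("pricing-agent")
def propose_discount(customer_id, basket_total):
    # Toy decision logic standing in for a model or agent call.
    return {"customer": customer_id, "discount_pct": 5 if basket_total > 100 else 0}


propose_discount("C-42", 150)
print(len(AUDIT_TRAIL), AUDIT_TRAIL[0]["agent"], AUDIT_TRAIL[0]["action"])
```

The same decorator pattern extends naturally to Level 3 concerns: the wrapper is also the place to enforce an autonomy control matrix (reject or escalate actions outside the agent's granted authority) before the underlying function ever runs.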
-
𝗔𝗿𝗲 𝘆𝗼𝘂 𝗽𝗿𝗼𝗮𝗰𝘁𝗶𝘃𝗲𝗹𝘆 𝗺𝗮𝗻𝗮𝗴𝗶𝗻𝗴 𝘆𝗼𝘂𝗿 𝗦𝗼𝘂𝗿𝗰𝗲-𝘁𝗼-𝗣𝗮𝘆 𝘁𝗲𝗰𝗵𝗻𝗼𝗹𝗼𝗴𝘆 𝗰𝗼𝘀𝘁𝘀? If not, why let savings from smart Procurement slip away due to outdated technology or suboptimal use? S2P technology plays a central role in cost management, yet many companies lack a strategic approach to continuously assessing and optimising their tech stack. Companies can adopt Bain & Co's "𝗥𝗲𝗱𝘂𝗰𝗲, 𝗥𝗲𝗽𝗹𝗮𝗰𝗲, 𝗮𝗻𝗱 𝗥𝗲𝘁𝗵𝗶𝗻𝗸" model to continuously evaluate their technology infrastructure and costs, ensuring a more optimised and sustainable cost profile. Here is the model in action for Source-to-Pay technology cost optimisation:
▪️ 𝗥𝗲𝗱𝘂𝗰𝗲 to recover 10 to 20% of costs through short-term actions such as:
- adjusting licenses to match actual usage and adoption patterns
- discontinuing features or functionalities that add little value
- switching off modules where business capabilities have not yet caught up
Avoid over-licensing by matching user access to actual needs and ensuring modules align with Procurement's readiness.
▪️ 𝗥𝗲𝗽𝗹𝗮𝗰𝗲 to yield 20 to 30% in savings by:
- transitioning to cost-optimal, flexible solutions and getting out of lock-ins
- switching subscription models when premium offerings are unnecessary
- consolidating overlapping tools that offer similar features
For example, merge multiple eSourcing tools into a primary platform and adopt tender-based pricing for niche auction needs. This helps align the cost profile of your Source-to-Pay technology with actual needs.
▪️ 𝗥𝗲𝘁𝗵𝗶𝗻𝗸 to realise up to 40% cost optimisation by:
- reimagining the architecture with a modular, composable design
- automating and orchestrating processes and integrating new digital tools
- reevaluating the mix of best-of-breed solutions vs integrated suites
A new Procurement strategy requires a fresh look at the S2P tech stack to ensure it adapts and supports growth cost-effectively, while offering flexibility through additional digital levers like AI and automation.
𝗢𝗽𝘁𝗶𝗺𝗶𝘀𝗶𝗻𝗴 𝗦𝟮𝗣 𝘁𝗲𝗰𝗵𝗻𝗼𝗹𝗼𝗴𝘆 𝗶𝘀 𝗮 𝗰𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗷𝗼𝘂𝗿𝗻𝗲𝘆, 𝗻𝗼𝘁 𝗮 𝗼𝗻𝗲-𝘁𝗶𝗺𝗲 𝗲𝗳𝗳𝗼𝗿𝘁, especially with contractual commitments, sunk costs, and change management challenges. Rather than following IT preferences and standards, it's about keeping technology fresh and aligned with business needs as they evolve. ❓ How do you manage your S2P technology to adapt to changing business needs while maintaining cost efficiency?
-
🌱 Are you strategic by being sustainable as a project professional? Being sustainable goes beyond environmental impact. Sustainable habits and strategies are crucial for project professionals looking to create long-term social and economic impact. Here are some key practices you can leverage:
𝐓𝐫𝐢𝐩𝐥𝐞 𝐁𝐨𝐭𝐭𝐨𝐦 𝐋𝐢𝐧𝐞 (𝐓𝐁𝐋) 𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡: First introduced by John Elkington in 1994, this framework is widely used in sustainability and corporate social responsibility contexts. Project professionals should adopt the TBL framework, which considers social, environmental, and economic impacts when evaluating project success. This approach ensures that projects benefit not just the organization's bottom line but also society and the environment.
𝐒𝐭𝐚𝐤𝐞𝐡𝐨𝐥𝐝𝐞𝐫 𝐄𝐧𝐠𝐚𝐠𝐞𝐦𝐞𝐧𝐭: Engaging with all stakeholders, including local communities, NGOs, government bodies, and businesses, is absolutely crucial. Understanding their needs, concerns, and aspirations helps in designing projects that align with their interests, increasing the chances of long-term success and support.
𝐋𝐨𝐧𝐠-𝐓𝐞𝐫𝐦 𝐕𝐢𝐬𝐢𝐨𝐧 𝐚𝐧𝐝 𝐏𝐥𝐚𝐧𝐧𝐢𝐧𝐠: Sustainable projects require a long-term vision and planning. Instead of focusing solely on short-term gains, project professionals must consider the long-term implications of their actions for the organization, the environment, and society.
𝐑𝐞𝐬𝐨𝐮𝐫𝐜𝐞 𝐄𝐟𝐟𝐢𝐜𝐢𝐞𝐧𝐜𝐲: Sustainable projects should prioritize the efficient use of resources (human, material, or capital). This includes reducing waste, planning and utilizing work properly, optimizing energy and water consumption, and using renewable or recyclable materials whenever possible.
𝐒𝐨𝐜𝐢𝐚𝐥 𝐈𝐧𝐜𝐥𝐮𝐬𝐢𝐨𝐧 𝐚𝐧𝐝 𝐃𝐢𝐯𝐞𝐫𝐬𝐢𝐭𝐲: Ensure that the project promotes social inclusion and diversity, empowering marginalized communities and ensuring equal opportunities for all. This can lead to more resilient and equitable societies.
Join Ahad Nazir and me on the @strategic project leader podcast as we discuss sustainability becoming mainstream in the practice of project management.
Join us live and be part of the conversation using the link in the comment. #strategicthinking #projectmanagement #sustainability #strategicprojectmanagement
-
Interested in a career in #AIgovernance but don’t know where to start? You’re not alone. Three people reached out last week looking for advice (and lots more have over time). In my opinion, it starts with building a demonstrable foundation of knowledge. And the good news is that high-quality, low-cost training is readily available. Here’s a list of the best courses under $100 from leading companies and schools to get you started. 👇 (Tomorrow I’ll share a list of non-profit organizations that you can volunteer with to start putting your AI knowledge into action.)
Google
🔗 Introduction to Responsible AI: https://lnkd.in/ebDADGKK (1 hr)
🔗 Responsible AI – Applying AI Principles: https://lnkd.in/ex-wmeqX (1 hr)
🔗 Responsible AI for Developers: https://lnkd.in/eMucsQRu (11 hrs)
Microsoft
🔗 Ethical & Regulatory Implications of GenAI: https://lnkd.in/e8NuxKF8 (3 hrs)
IBM
🔗 Impact, Ethics & Issues with GenAI: https://lnkd.in/eHegg55E (3 hrs)
University of Helsinki
🔗 Ethics of AI: https://lnkd.in/ejvu4bdv (2 hrs)
Michigan
🔗 GenAI: Governance, Policy & Emerging Regulation: https://lnkd.in/eZnx7zPH (2 hrs)
Penn
🔗 AI Strategy & Governance: https://lnkd.in/eyp_PtxU (9 hrs)
Northeastern
🔗 Ethics & Governance in the Age of GenAI: https://lnkd.in/eN7D8eBY (23 hrs)
LinkedIn Learning
🔗 Foundations of Responsible AI: https://lnkd.in/eXcFpa_w (2.5 hrs)
🔗 Building a Responsible AI Program: https://lnkd.in/eEN_MhiH (1.5 hrs)
🔗 Understanding & Implementing the NIST AI Risk Management Framework: https://lnkd.in/eEZUu6Xp (1.5 hrs)
🔗 Navigating the EU AI Act: https://lnkd.in/eu7YSQKd (1.5 hrs)
🔗 The State of AI & Copyright: https://lnkd.in/epNyhAHJ (1 hr)
🔗 Introduction to Auditing AI Systems: https://lnkd.in/ePX8XCVP (1.5 hrs)
🔗 Algorithmic Auditing & Continuous Monitoring: https://lnkd.in/eCwaYuyk (1 hr)
🔗 Managing AI Security Risks with ISO 27001: https://lnkd.in/ezkBdiYd (1.5 hrs)
-
When Self-Interest Overrides Cultural Integrity: A First Nations Perspective on Board Conflicts By Nicole Brown On Day Two of the AICD course, the topic of failing to act in good faith sparked necessary reflection — especially when applied to the realities of First Nations governance. The reminder that “boards decide if conflicts exist” and that directors must “disclose any material interests” becomes even more complex in our communities, where kinship, culture, and politics are deeply interwoven. Let’s be clear: perception matters. In fact, in First Nations communities, perception can be just as powerful as the facts. Even when a director believes they’re acting appropriately, if their actions are perceived as self-serving or exclusive, trust can be lost in an instant. And in small communities where decisions echo loudly, perception is reality. In a good light, perception can uphold integrity — when a board is transparent, inclusive, and actively declares conflicts, it builds confidence. When mob can see that decisions are being made fairly, it fosters cultural safety and strengthens the legitimacy of the leadership. This is the power of perception used well: reinforcing accountability through visible action. But in a bad light, perception can destroy credibility. If a board refuses to acknowledge or record conflicts of interest — or worse, doesn’t even have a conflict of interest register — it gives the impression of secrecy and favouritism. Directors may think they’re just “helping out family,” but when they influence decisions that benefit their personal networks, the perception is one of corruption, even if it’s not illegal. That damage is long-lasting. Let’s not forget: people naturally look after their own self-interest. But governance isn’t about instincts — it’s about discipline. It’s about putting the interests of the whole community above individual or family gain. 
It’s about doing the right thing, even when no one is watching, and especially when everyone is. That’s why boards must go beyond compliance and foster a culture of transparency. That means:
☑️ Actively maintaining a living conflict-of-interest register
☑️ Discussing perceived conflicts, not just actual ones
☑️ Creating space for culturally safe disclosures
☑️ Recognising that perception alone can undermine the board’s credibility
In First Nations governance, acting in good faith is about more than rules. It’s about relationship, responsibility, and respect. Perception, when managed with integrity, can be a powerful ally; when ignored, it becomes a quiet storm that erodes the very foundations we stand on.
-
Reinvigorated (and exhausted!) after two intense days of learning at the IAPP - International Association of Privacy Professionals' AI Governance Professional (#AIGP) Training. Here are some of my main takeaways: 1️⃣ In order to help an organization govern #AI effectively, you don’t need to be a data scientist, but you do need to understand basic AI terminology and concepts. 2️⃣ AI is not an end in itself. To facilitate effective AI governance, the responsibility of AI governance professionals is also to question whether and what AI is the appropriate solution to the business problem at hand. 3️⃣ At the same time, AI governance professionals should have a pro-innovation mindset. There are many benefits to the responsible use of AI and AI governance professionals should help their organizations advance. 4️⃣ #Privacy professionals are well suited to become AI governance professionals. Core data protection concepts such as purpose limitation, data quality, and data minimization continue to apply. At the same time, they should have basic familiarity with other legal obligations, e.g., in the areas of copyright, product safety, and product liability. 5️⃣ Relatedly, AI will never be deployed in a vacuum. One of the biggest challenges I anticipate with respect to the governance of AI is how to effectively embed AI governance within existing structures, e.g., the product development lifecycle, data protection impact assessments, quality management, and risk assessments in accordance with international standards and certifications, etc. 6️⃣ More generally, AI governance professionals need to be able to go both broad and deep. Broad, in the sense that they need to understand existing laws and regulations, and how they apply to the development and deployment of AI; deep, in the sense that they need to understand the risks and opportunities that attach to the use of AI in their specific sectors. 
7️⃣ Finally, AI governance professionals need to foster a company culture in which the development of trustworthy, human-centric, and accountable AI is considered key to business success. This includes identifying advocates at the leadership level, but also evangelists within each relevant department. The time to familiarize oneself with the basic tenets of AI governance is now. The #IAPP #AIGP Training is a great way to get started. Special shoutout to our excellent instructor Oliver Patel, CIPP/E for facilitating. Grateful to everyone who contributed to the detailed participant guide too, which enabled me to independently work through the material of the sessions I could not attend in person. AI governance requires ongoing learning and development, particularly as the regulatory environment continues to evolve. I look forward to working together with friends and colleagues in the industry to further track these developments and translate them into everyday organizational practice.
-
Firing middle managers won't accelerate decisions. The bottleneck just moves up. The middle-management culling continues. The promise: fewer layers mean faster data and quicker decisions. Yet most organizations repeat the same mistake. When every meaningful decision still needs approval from the same five executives, you haven't solved anything. You've just hit the bottleneck faster. We've been here before:
→ ERP systems would revolutionize decision-making
→ Big data would unlock instant insights
→ Digital transformation would make us agile
Now it's AI and flat hierarchies. Same promise, different wrapper. LegacyCo's governance trap isn't about having too many managers. It's about concentrating judgment at the top while expecting speed at the edges. "Have we pressure-tested this fully?" "What's our governance for downside risk?" "We need stronger stakeholder alignment." This isn't prudence. It's paralysis dressed up as process. While others added approval layers, Ritz-Carlton gave frontline staff $2,000 of discretionary authority. Decision time went from days to minutes. Customer satisfaction soared. The difference wasn't fewer managers. It was judgment distributed to where the information lives. NewCo architects judgment into the system itself. Two roles make this possible:
Forward Deployed Engineers (FDE): technical talent with deployment authority. They see the problem, they fix it. No tickets, no committees.
Operational Technologists (OpTech): business experts who implement their own solutions. The person who knows the process can now improve the process.
One brings code. One brings context. Both exercise judgment at market speed. An important distinction: distributed judgment without guardrails creates chaos, not speed.
NewCo architects trust into the system:
→ Define clear decision boundaries upfront
→ Give teams authority within those boundaries
→ Treat every choice as an experiment
→ Measure outcomes in real time, not quarterly
→ Escalate by exception, not by default
This is orchestrated judgment: wisdom scaled through systems, not hierarchies. Scaling judgment means developing wisdom across the organization, not hoarding it at the top. This requires:
→ Clarity: teams who understand impact, not just metrics
→ Discernment: knowing which battles matter
→ Taste: recognizing quality without committees
→ Connection: building trust that enables autonomy
Juniors tackle harder problems sooner. Teams develop judgment through practice, not observation.
LegacyCo: "Check with me before you move."
NewCo: "Move within these boundaries."
One leads to faster bottlenecks. The other leads to market-speed execution. The winners won't have the flattest org charts. They'll have the most distributed judgment. The question isn't how many managers to fire. It's how much judgment you're willing to trust others with.
-
Some of the best conversations in our team don’t happen in a boardroom; they happen in airports, coffee shops, or right after a client meeting. At Youniq Minds, we don’t sit under one roof. Our team lives in different cities, coming together in person only when a client assignment calls us. And yet, every time we meet, it feels like picking up from where we left off, as though distance never stood in the way. That’s the gift and the challenge of leading virtual teams. Flexibility and diversity of thought come naturally. But so do hurdles: miscommunication, different working styles, the absence of casual watercooler moments, and the silent risk of burnout. Over time, we’ve learned that the glue isn’t just processes or tools. It’s intentional leadership. The Center for Creative Leadership offers some powerful best practices that we often apply with our clients: - Define the team’s purpose and align on vision. - Clarify roles and expectations. - Establish clear procedures and working norms. - Invest in trust, celebrate small wins, encourage input, and stay connected. - Recognize differences: cultural, generational, and experiential. For us, one of the most powerful practices has been bringing in a coach to facilitate conversations. Those moments surface the unspoken, strengthen alignment, and turn distance into connection. Because leading virtually isn’t just about managing tasks, it’s about managing distance, diversity, and differences. Done with care, virtual teams don’t just work, they thrive. They become engines of trust and innovation. This picture is a reminder that distance doesn’t limit collaboration, but it does require leaders to be intentional. What about you? What’s one practice that has helped you thrive in a virtual team? #YouniqMinds #VirtualTeam #VirtualLeadership #TeamCulture #LeadershipDevelopment #RemoteTeams #CoachingForLeaders #TrustInTeams #Coaching #LDPerspectives
-
Cloud costs are becoming the blind spot in digital transformation. A huge mistake is thinking cost control comes after deployment. Gartner, IDC, and regional surveys show the same thing: cloud adoption is scaling, and so is waste. That raises hard questions for every delivery lead: How do we track value, not just spend? How do we forecast accurately? How do we stay cost-resilient across regions? It's not about the cloud provider. It's about the discipline behind it. And the reality: 94% of global organisations report cost overruns. The most common culprits? Idle compute. Unused storage. No tagging. No shutdown policies. Here's why it keeps happening:
→ No unit-cost ownership
→ No spend visibility at the service level
→ No roadmap alignment
These aren't random misses. They're signs of a systemic problem:
→ Engineering owns infra, not budgets
→ Finance owns totals, not workloads
→ PMOs track milestones, not consumption
That's why we use tools like:
ⓘ AWS Cost Explorer to track EC2, S3, and Lambda usage
ⓘ Azure Cost Management for daily anomaly alerts
ⓘ GCP Billing for service-level granularity
ⓘ CloudZero, Ternary, and nOps to push unit cost per job or user
One UAE fintech cut idle compute by 37% in Q2 by tagging early, automating shutdowns, and publishing per-team cost scorecards. Cloud isn't expensive. Lack of ownership is. Vision precedes savings (الرؤية تسبق الوفورات). Savings follow visibility.
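The "tag early" discipline above can be enforced with a few lines of code rather than a policy document. The sketch below is a hypothetical Python example: the required tag keys ("team", "env", "shutdown-policy") are my own illustrative choices, not a standard, and the input mimics the shape of instance entries returned by boto3's `ec2.describe_instances()` so a real inventory could be fed straight in.

```python
# Flag resources that are missing the tags needed for cost allocation.
# The required tag keys here are illustrative; real keys come from your
# organisation's own tagging standard.
REQUIRED_TAGS = {"team", "env", "shutdown-policy"}


def missing_tags(resource):
    """Return the set of required tag keys absent from one resource.

    `resource` mimics an instance entry from boto3's ec2.describe_instances()
    response: {"InstanceId": ..., "Tags": [{"Key": ..., "Value": ...}, ...]}.
    """
    present = {t["Key"] for t in resource.get("Tags", [])}
    return REQUIRED_TAGS - present


def untagged_report(resources):
    """Map resource id -> sorted list of missing tag keys, flagged resources only."""
    report = {}
    for r in resources:
        gaps = missing_tags(r)
        if gaps:
            report[r["InstanceId"]] = sorted(gaps)
    return report


fleet = [
    {"InstanceId": "i-001", "Tags": [{"Key": "team", "Value": "data"},
                                     {"Key": "env", "Value": "prod"},
                                     {"Key": "shutdown-policy", "Value": "never"}]},
    {"InstanceId": "i-002", "Tags": [{"Key": "env", "Value": "dev"}]},
]

print(untagged_report(fleet))
# → {'i-002': ['shutdown-policy', 'team']}
```

In practice a check like this would run on a schedule against the live inventory and feed the per-team cost scorecards mentioned above: a resource that no team will claim a tag for is a strong candidate for a shutdown policy.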