When Boards Face the Mirror: Ethics, Oversight, and Accountability

Background of the Case
Rajendra Lodha, a director at Lodha Developers, has stepped down while his conduct remains under review. This development has drawn attention to governance practices in India's real estate sector. It is also a reminder of the critical role that boards, ethics committees, and independent directors play in ensuring accountability.

Role of Ethics Committees
Ethics committees act as watchdogs of integrity. They investigate allegations of misconduct, ensure transparency, and protect stakeholder trust. Their role goes beyond compliance: they are tasked with upholding the values that shape corporate culture. Without strong ethics oversight, organizations risk reputational and financial damage.

Legal Provisions
Under the Companies Act, 2013, directors are bound by fiduciary duties and disclosure requirements. Independent directors are required to safeguard minority shareholders, oversee related-party transactions, and ensure decisions align with the company's ethical standards. The SEBI (LODR) Regulations also strengthen the monitoring of board conduct, ensuring that corporate governance frameworks are not merely symbolic.

Role of Independent Directors
Independent directors serve as neutral voices on the board. They question management decisions, review risk exposure, and hold leadership accountable. In situations like the Lodha case, their presence becomes vital for a fair and unbiased assessment. They bridge the gap between shareholders and management, reinforcing ethical decision-making.

Summary of References
Companies Act, 2013
SEBI (LODR) Regulations
Case of Lodha Developers
Institutional Review Processes
-
In my academic journey, one of the most common questions I encounter is: What makes a research grant fundable? 🤔

Let's consider this scenario:

📌 Grant Application A: The project idea is promising, but the goals are vague and disconnected from the research question. Methods are unclear, impact is hard to gauge, and the proposal lacks a timeline, milestones, and a strong literature review. 😶

📌 Grant Application B: The project is solid, if not groundbreaking, with clear goals, detailed methods, and a well-defined impact statement. It uses existing research effectively to justify the study, and its wording aligns with funders' expectations, showcasing both innovation and contribution to existing knowledge. 😊

So, who do you think is more likely to get the grant? 💡

The research idea is just one piece of the puzzle. How well it's presented and its connection to existing research are key. Demonstrating the relevance and impact of your work can make all the difference in grant submissions.

For those new to my content, I'm Ali Hindi, a lecturer at The University of Manchester. I share useful academic tips on LinkedIn and beyond. If you found this helpful, repost and follow me for updates on my latest posts. 👊 😎
-
Are your programs making the impact you envision, or are they costing more than they give back?

A few years ago, I worked with an organization grappling with a tough question: Which programs should we keep, grow, or let go? They felt stretched thin, with some initiatives thriving and others barely holding on. It was clear they needed a clearer strategy to align their programs with their long-term goals.

We introduced a tool that breaks programs into four categories: Heart, Star, Stop Sign, and Money Tree, each with its own strategic path.

- Heart: These programs deliver immense value but come with high costs. The team asked: Can we achieve the same impact with a leaner approach? They restructured staffing and reduced overhead, preserving the program's impact while cutting costs by 15%.
- Star: High-impact, high-revenue programs that beg for investment. The team explored expanding partnerships for a standout program and saw a 30% increase in revenue within two years.
- Stop Sign: Programs that drain resources without delivering results. One initiative had consistently low engagement. They gave it a six-month review period but ultimately decided to phase it out, freeing resources for more promising efforts.
- Money Tree: The revenue-generating champions. Here, the focus was on growth: investing in marketing and improving operations to double their margin within a year.

This structured approach led to more confident decision-making and, most importantly, brought them closer to their goal of sustainable success.

According to a report by Bain & Company, organizations that regularly assess program performance against strategic priorities see a 40% increase in efficiency and long-term viability. Yet many teams shy away from the hard conversations this requires.

The lesson? Not every program needs to stay. Evaluating them through a thoughtful lens of impact and profitability ensures you're investing where it matters most.

What's a program in your organization that could benefit from this kind of review?
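As a minimal sketch of the impact/revenue matrix described above: the field names, the 0-10 impact rating, and the cutoff values are illustrative assumptions, not the author's actual tool.

```python
# Illustrative quadrant sort for a program portfolio (assumed fields and cutoffs).
from dataclasses import dataclass

@dataclass
class Program:
    name: str
    impact_score: float    # mission impact, e.g. a 0-10 rating from an internal review
    revenue_margin: float  # net revenue contribution (negative = costs more than it earns)

def classify(p: Program, impact_cutoff: float = 5.0, margin_cutoff: float = 0.0) -> str:
    """Place a program in one of the four quadrants."""
    high_impact = p.impact_score >= impact_cutoff
    high_margin = p.revenue_margin >= margin_cutoff
    if high_impact and high_margin:
        return "Star"        # high impact, high revenue: invest and grow
    if high_impact:
        return "Heart"       # high impact, high cost: look for a leaner delivery model
    if high_margin:
        return "Money Tree"  # revenue engine: protect and optimize
    return "Stop Sign"       # low impact, low revenue: review, then phase out

portfolio = [
    Program("Youth mentoring", impact_score=8.5, revenue_margin=-40_000),
    Program("Annual conference", impact_score=4.0, revenue_margin=120_000),
]
for p in portfolio:
    print(f"{p.name}: {classify(p)}")
```

The point of the sketch is only that the sorting rule is explicit; in practice the scores would come from the kind of program review the post describes.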
-
The wait is over! ISO 21001:2025 is here: Management Systems for Educational Organizations – Requirements with Guidance for Use.

Published in July 2025, this revision is a major step forward in shaping quality education worldwide. As #ProjectLeader, I'm honored to have contributed to this important work. Grateful to ISO - International Organization for Standardization for the guidance and to the Bureau of Indian Standards for their trust and support.

What's new in ISO 21001:2025?
1️⃣ Expanded vocabulary – Includes updated terms like evaluation, formative and summative assessment, and learner with special needs.
2️⃣ Sustainability integration – Climate change considerations embedded in the organizational context.
3️⃣ Clearer framework – Stronger distinction between learning delivery, assessment methods, and evaluation criteria.
4️⃣ Broader stakeholder classification – As detailed in Annex C.
5️⃣ Process mapping – Core education management processes mapped to requirements (Annex E).
6️⃣ Enhanced alignment with other standards – Updated examples of alignment with European standards, especially EQAVET (Annex F).
7️⃣ Digital/hybrid learning recognition – Enhanced guidance for virtual delivery, quality assurance, and data protection.

What this means for education:
✅ More inclusive and equitable learning experiences
✅ Stronger sustainability and institutional resilience
✅ Robust digital and hybrid learning models
✅ Better data protection in an AI-enabled education ecosystem

A heartfelt thank you to all the members of our dedicated working group WG7, our ISO TC 232 Technical Programme Manager Sally Swingewood, Chair Bill Rivers, and Committee Manager Fei HOU. I'm truly proud of what we've achieved together.

Looking ahead: let's leverage ISO 21001:2025 to inspire learner-centered, future-ready education systems across the globe.

#ISO21001 #EducationStandards #QualityEducation #SustainabilityInEducation #AIInEducation #isostandards
-
Impact evaluation is a crucial tool for understanding the effectiveness of development programs, offering insights into how interventions influence their intended beneficiaries. The Handbook on Impact Evaluation: Quantitative Methods and Practices, authored by Shahidur R. Khandker, Gayatri B. Koolwal, and Hussain A. Samad, presents a comprehensive approach to designing and conducting rigorous evaluations in complex environments. With its emphasis on quantitative methods, this guide serves as a vital resource for policymakers, researchers, and practitioners striving to assess and enhance the impact of programs aimed at reducing poverty and fostering development.

The handbook delves into a variety of techniques, including randomized controlled trials, propensity score matching, double-difference methods, and regression discontinuity designs, each tailored to address specific evaluation challenges. It bridges theory and practice, offering case studies and practical examples from global programs, such as conditional cash transfers in Mexico and rural electrification in Nepal. By integrating both ex-ante and ex-post evaluation methods, it equips evaluators to not only measure program outcomes but also anticipate potential impacts in diverse settings.

This resource transcends technical guidance, emphasizing the strategic value of impact evaluation in informing evidence-based policy decisions and improving resource allocation. Whether for evaluating microcredit programs, infrastructure projects, or social initiatives, the methodologies outlined provide a robust framework for generating actionable insights that can drive sustainable and equitable development worldwide.
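To make one of those techniques concrete, here is a toy illustration of the double-difference (difference-in-differences) idea: compare the before/after change for program participants with the same change for non-participants. The household incomes below are invented for the example, and the result is only meaningful under the usual parallel-trends assumption.

```python
# Toy double-difference (difference-in-differences) calculation on invented data.
import pandas as pd

df = pd.DataFrame({
    "household": [1, 1, 2, 2, 3, 3, 4, 4],
    "treated":   [1, 1, 1, 1, 0, 0, 0, 0],  # 1 = program participant
    "post":      [0, 1, 0, 1, 0, 1, 0, 1],  # 0 = baseline survey, 1 = follow-up
    "income":    [100, 140, 90, 135, 95, 110, 105, 118],
})

means = df.groupby(["treated", "post"])["income"].mean()
change_treated = means.loc[(1, 1)] - means.loc[(1, 0)]  # change among participants
change_control = means.loc[(0, 1)] - means.loc[(0, 0)]  # change among non-participants
did_estimate = change_treated - change_control
print(f"Double-difference estimate of the program effect: {did_estimate:.1f}")
```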
-
Please do NOT start research on human subjects unless you have taken into account the ethics part. I beg you, please! 😂

I've encountered multiple cases of my mentees who started a project without the necessary approvals, and when it came to journal publication, they were stuck!

Let's see what we need to get started 👇

1. Informed consent: Ensure participants fully understand the research, its potential risks and benefits, and their right to withdraw without consequence (you must include this in your submission!).
2. Privacy and confidentiality: Safeguard participant data through anonymization, encryption, and secure storage (you'll have to describe this in your methods section).
3. Vulnerable populations: If research involves children, the elderly, prisoners, or those with cognitive impairments, additional measures must protect their rights and well-being.
4. Benefit-risk assessment: Weigh potential benefits and risks to participants, considering not only physical harm but also psychological and social impacts.
5. Data integrity and transparency: Ensure accurate data collection, analysis, and reporting.
6. Researcher bias and conflicts of interest: Address personal biases and financial conflicts, with transparent disclosure and mitigation strategies.
7. Cultural sensitivity: Respect diverse cultural values and beliefs.

AND, here comes the tough one 👇

8. Institutional review board (IRB) approval: An approval letter issued by an IRB is compulsory for every single submission that involves research on human subjects.

___________________

🔔 This is Dr. Samira Hosseini. Scholars who took my training have published 2,000+ articles in top-tier journals. Join my inner circle so you don't miss even one single bit of learning: https://lnkd.in/eVNSihCM
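As a rough illustration of how a team might gate a submission on the items above, here is a small checklist sketch; the field names and the pass/fail logic are assumptions made for this example, not any journal's or IRB's actual workflow.

```python
# Illustrative pre-submission ethics checklist (assumed item names and gating rule).
REQUIRED_ITEMS = [
    "informed_consent_described",
    "privacy_and_confidentiality_described",
    "vulnerable_population_safeguards",   # include only if such groups are involved
    "benefit_risk_assessment",
    "data_integrity_plan",
    "conflicts_of_interest_disclosed",
    "irb_approval_letter",
]

def missing_items(submission: dict) -> list[str]:
    """Return the ethics items that are absent or marked False in a submission record."""
    return [item for item in REQUIRED_ITEMS if not submission.get(item)]

submission = {
    "informed_consent_described": True,
    "privacy_and_confidentiality_described": True,
    "benefit_risk_assessment": True,
    "irb_approval_letter": False,  # the "tough one": no approval letter yet
}
gaps = missing_items(submission)
print("Ready to submit" if not gaps else f"Blocked; still missing: {gaps}")
```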
-
The CDC has updated its Framework for Program Evaluation in Public Health for the first time in 25 years.

This is an essential resource for anyone involved in programme evaluation, whether in public health, community-led initiatives, or systems change. It reflects how evaluation itself has evolved, integrating principles like advancing equity, learning from insights, and engaging collaboratively. The CDC team describes it as a "practical, nonprescriptive tool". The framework is designed for real-world application, helping practitioners move beyond just measuring impact to truly understanding and improving programmes.

I particularly like the way they frame common evaluation misconceptions, including:

1️⃣ Evaluation is only for proving success. Instead, it should help refine and adapt programmes over time.
2️⃣ Evaluation is separate from programme implementation. The best evaluations are integrated from the start, shaping decision-making in real time.
3️⃣ A "rigorous" evaluation must be experimental. The framework highlights that rigour is about credibility and usefulness, not just methodology.
4️⃣ Equity and evaluation are separate. The new framework embeds equity at every stage: who is involved, what is measured, and how findings are used.

Evaluation is about learning, continuous improvement, and decision-making, rather than just assessment or accountability. As they put it:

"Evaluations are conducted to provide results that inform decision making. Although the focus is often on the final evaluation findings and recommendations to inform action, opportunities exist throughout the evaluation to learn about the program and evaluation itself and to use these insights for improvement and decision making."

This update is a great reminder that evaluation should be dynamic, inclusive, and action-oriented: a process that helps us listen better, adjust faster, and drive real change.

"Evaluators have an important role in facilitating continuous learning, use of insights, and improvement throughout the evaluation (48,49). By approaching each evaluation with this role in mind, evaluators can enable learning and use from the beginning of evaluation planning. Successful evaluators build relationships, cultivate trust, and model the way for interest holders to see value and utility in evaluation insights."

Source: Kidder, D. P. (2024). CDC Program Evaluation Framework, 2024. MMWR Recommendations and Reports, 73.
-
AI isn't assisting science anymore. It's authoring it. But what if the author has no conscience?

It fakes citations. Rewrites findings. Drafts grants. All before you blink. This isn't progress. It's precision without principle. Truth now comes pre-trained. And peer review can't keep up. We're not streamlining science. We're short-circuiting it. And with no intervention, the tools don't just drift, they distort the very idea of truth.

The European Commission's whitepaper isn't just regulation. It's a firewall for scientific integrity. For those funding, governing, or scaling AI in research, it's the baseline for trust, accountability, and future-proof discovery. It's a must-read. And a call to act, now.

🔸 Why These Guidelines Matter
➝ GenAI speeds discovery but magnifies risk.
➝ Disinformation and IP abuse are rising.
➝ Trust, transparency, and accountability are non-negotiable.

🔸 Guiding Principles
➝ Reliability: Keep research solid and reproducible.
➝ Honesty: Always disclose AI use.
➝ Respect: Protect data, people, and systems.
➝ Accountability: Humans remain responsible.

🔸 For Researchers
➝ Own every AI-supported output.
➝ Disclose tools used clearly.
➝ Don't upload sensitive data.
➝ Cite properly. No plagiarism.
➝ Don't use AI in reviews or evaluations.

🔸 For Research Organisations
➝ Train everyone across roles.
➝ Encourage disclosure without fear.
➝ Track how AI is used internally.
➝ Offer secure, local GenAI tools.
➝ Build this into your ethics policies.

🔸 For Funding Bodies
➝ Link funding to responsible AI use.
➝ Make disclosure a must.
➝ Ban AI in scientific reviews.
➝ Use GenAI responsibly in operations.
➝ Fund ethics training widely.

🔸 Research Integrity
➝ Uphold ALLEA's Code of Conduct: quality, transparency, fairness, societal responsibility.

🔸 Trustworthy AI Pillars
➝ Respect human autonomy
➝ Prevent harm
➝ Ensure fairness
➝ Prioritise explicability
➝ Ensure oversight, privacy, and transparency

🔸 Evolving Together
➝ These guidelines will evolve.
➝ Updates will track tech and policy shifts.
➝ Community input is welcome.

🔸 Key Takeaways
➝ GenAI should support, not steer, research.
➝ Disclosure builds trust, not risk.
➝ Researchers, institutions, and funders must align.

Bottom Line: In research, credibility is everything. GenAI can support it, but only when used with care, clarity, and conscience.

Alex Wang Cobus Greyling Hr. Dr. Takahisa Karita Sarvex Jatasra Lewis Tunstall Martin Roberts, Michael Spencer Pascal BORNET Dr. Ram Kumar G, Ph.D, CISM, PMP Pavan Belagatti Rafah Knight JOY CASE Sara Simmonds Prasanna Lohar

#AI #GenAI #AIinResearch #TrustworthyAI #EthicalAI #Research #Researchers

🔺 Looking to engage with insights that matter? 🔺 Follow Shalini Rao
-
It's right to be concerned about assurance of learning, but letting security drive assessment design is not the way forward. Nor is an excessive focus on GenAI. These are just part of the array of factors that need to be considered.

A suggested, GenAI-agnostic alternative:

1. Start with the kinds of knowledge you want to see (what is specified in learning outcomes, nuanced through careful discussion with the program teaching team*).
2. Work out how these might be demonstrated in ways that make sense for your disciplinary focus.
3. Think about how you might be able to see these being demonstrated (rather than assuming after the fact that they were demonstrated).
4. Ask yourself under what circumstances you would be convinced by what you're seeing.
5. Think about how convincing demonstrations could be sampled over time (rather than putting all of the responsibility on snapshot assessments).
6. Try to design an assessment according to 1-5.
7. Ask yourself whether your design aligns with your educational values, the trust relations you want to promote between yourself and your students, and the kinds of graduates you want to cultivate.
8. Go back to step 1 until you're happy with step 7.

* If you don't have a program teaching team, make one.

You might be thinking: "But that's all really hard to do." Yes, it is. Assessment design is really hard.

What do you think? How would you change these suggested steps?
-
The "Framework for Program Evaluation in Public Health," published by the CDC in 1999, provides structured steps and standards for conducting program evaluations effectively. This framework, which is widely recognized globally, was shaped in alignment with the Program Evaluation Standards developed by the Joint Committee on Standards for Educational Evaluation. These standards emphasize that evaluations should be useful, practical, ethical, accurate, transparent, and economically sensible.

The framework is adaptable and not prescriptive about the focus, design, or methods of evaluation, making it compatible with various international approaches, particularly in humanitarian settings.

Key aspects of the framework include:

1. Engaging stakeholders: Involving those affected by the program and those who will use the evaluation results.
2. Describing the program: Detailing the program's needs, expected effects, activities, resources, development stage, context, and logic model.
3. Focusing the evaluation design: Clarifying the evaluation's purpose, users, uses, questions, methods, and procedural agreements.
4. Gathering credible evidence: Ensuring data quality and addressing logistical issues related to data collection and handling.
5. Justifying conclusions: Analyzing data, interpreting results, and making recommendations based on established criteria and stakeholder values.
6. Ensuring use and sharing lessons learned: Planning for the use of evaluation results from the start, engaging stakeholders throughout, and effectively communicating findings.

This comprehensive approach aids in enhancing program evaluation and accountability across diverse settings worldwide.

#PublicHealth #CDC #ProgramEvaluation
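As a simple illustration only, the six steps above can be treated as an ordered plan and tracked against documented outputs; the status-tracking idea and the wording of the example outputs are assumptions for this sketch, not part of the CDC framework itself.

```python
# Illustrative tracker for the six CDC framework steps (assumed status-tracking scheme).
FRAMEWORK_STEPS = [
    "Engage stakeholders",
    "Describe the program",
    "Focus the evaluation design",
    "Gather credible evidence",
    "Justify conclusions",
    "Ensure use and share lessons learned",
]

def plan_status(outputs: dict) -> None:
    """Print each step in order, flagging steps with no documented output yet."""
    for step in FRAMEWORK_STEPS:
        note = outputs.get(step)
        marker = "[done]" if note else "[todo]"
        print(f"{marker} {step}" + (f" -> {note}" if note else ""))

plan_status({
    "Engage stakeholders": "advisory group convened in March",
    "Describe the program": "logic model drafted",
})
```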