The Center for Human-AI Innovation in Society

CHAIS Newsletter September 16, 2025

1. Research-to-Practice Seminar
Research Collaborations with Fairfax Fire Department
📅 Tuesday, September 30 | 10:00 a.m.–12:00 p.m.
📍 Hybrid (In-person: Fuse Room 6333 | Online: Zoom link: https://gmu.zoom.us/j/92424464450) 

CHAIS’ Craig Yu and Myeong Lee will lead a discussion showcasing their collaboration with the Fairfax Fire Department. They will share key research breakthroughs, lessons learned from the partnership, and perspectives on working with local government. Representatives from Fairfax Fire Department will also reflect on the partnership—its challenges, impact, and vision for the future. 

If you are interested in developing partnerships with local governments, engaging in research-to-practice collaborations, or becoming part of the ongoing partnership with the Fairfax Fire Department, we encourage you to join this event. While participation information is provided above, please also sign up so the organizers know you plan to attend – https://forms.office.com/r/wWXHv0cpRh 

Check for updates and the agenda for this seminar here – https://chais.gmu.edu/event/research-to-practice-seminar/ 

2. Leadership & Engagement Opportunities at CHAIS 

Special thanks to Alan Shark, who has agreed to serve as CHAIS’ first Outreach Committee Chair. This committee will design and lead outreach efforts to share the research of CHAIS faculty and build industry and government relationships, including through in-person engagements, webinars, websites, and social media. 

We are seeking two additional members to join the Outreach Committee. If you enjoy outreach activities and have strong internal or external networks, we encourage you to join Alan in this important effort. 

We are also seeking members to take on leadership roles in other CHAIS initiatives—such as a Board of Advisors, peer-mentoring programs, a proposal development working group, and more. If you are interested in contributing to these initiatives, or launching a new one, we look forward to hearing from you. 

3. Upcoming Event: CyberAI Summit 2025: AI, Careers, and IBM 

📅 Monday, September 22, 2025 | 10:00 a.m.–3:00 p.m. (lunch provided)
📍 1201 Merten Hall, 4441 George Mason Blvd, Fairfax, VA 22030
(Online participation available) 

Registration is required. Learn more and register here – https://crc.gmu.edu/event/gmu-cyberai-summit-ai-careers-and-ibm/ 

This one-day summit will explore how AI, cybersecurity, and enterprise infrastructure are shaping the future of technology. Participants will gain hands-on experience with IBM platforms, including LinuxONE – a free Linux virtual machine for coursework, projects, and research, available for faculty and students. In-person attendees can earn an exclusive badge.  

Please share this with your colleagues and interested students. We hope to see you there.  

 4. Women Executives in Tech Circle 

Peng is coordinating the Women Executives in Tech Circle, sponsored by Mason’s Cyber Resilience Center (CRC), which brings together a cohort of women leaders in technology for peer mentoring, shared learning, and discussions with experts in tech and leadership. 

If you or someone you know may be interested, please find more information and join here – https://crc.gmu.edu/women-executives-in-tech-circle/ 

 5. Faculty Research Spotlight 

 Thema Monroe-White 

Thema’s co-authored paper, “Social Networks and Entrepreneurial Outcomes of Women Founders: An Intersectional Perspective,” received the Best Paper Award this past July at the 2025 Diana International Research Institute Conference in Auckland, New Zealand. 

Thema presented her co-authored research, “Echoes of Eugenics: Tracing the Ideological Persistence of Scientific Racism in Scholarly Discourse,” at the 29th Annual International Conference on Science and Technology Indicators (STI-ENID) this September in Bristol, UK. This project uses machine learning and natural language processing (NLP) techniques to trace ideological bias in scholarly publications over time. 

New publication: Shieh, E., & Monroe-White, T. (2025, August). Teaching Parrots to See Red: Self-Audits of Generative Language Models Overlook Sociotechnical Harms. In Proceedings of the 2025 AAAI Summer Symposium Series (Vol. 6, No. 1, pp. 333-340). https://ojs.aaai.org/index.php/AAAI-SS/article/view/36070  

 Craig Yu 

Craig in the news: 

Craig’s recent award: Mason 2025 Innovators Award (Digital Innovation) 

Craig’s recent publications:  

  • Charles Ahn, Ashaki SetepenRa-Deloatch, Ubada Ramadan, Quang Vo, Jacob Matthew Wojtecki, Nathan Alan Moy, Ching-I Huang, Bo Han, Songqing Chen, Carley Fisher-Maltese, Lap-Fai Yu, Mohamed Alburaki, “Teleoperated 360 Video Capture of Beehives for Scientific Visualization in VR”, Research Demo, ACM Symposium on Virtual Reality Software and Technology (VRST), 2025  
  • William Ranc, Thanh Nguyen, Liuchuan Yu, Yongqi Zhang, Minyoung Kim, Haikun Huang, Lap-Fai Yu, “Multi-Player VR Marble Run Game for Physics Co-Learning”, Research Demo, IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2025 
  • Changyang Li, Qingan Yan, Minyoung Kim, Zhan Li, Yi Xu, Lap-Fai Yu, “Crafting Dynamic Virtual Activities with Advanced Multimodal Models”, IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2025 

To be featured in a future Faculty Research Spotlight, please submit achievements (grants, publications, awards, recognitions) from the last three months here – https://forms.office.com/r/jh04Hi3b58 

6. Student Research Spotlight 

We are adding a new feature to the CHAIS website to showcase current research topics and projects by doctoral students. Please encourage your students to submit a 300–500 word research description for consideration. Selected submissions will be published on CHAIS.gmu.edu along with a short bio and headshot. Priority will be given to projects that are highly relevant to CHAIS and that demonstrate interdisciplinary thinking, strong research design, and potential for broad impact. 

Please invite qualified students to submit here – https://forms.office.com/r/wVQ0K0u7D3 

7. Funding Opportunities 

NSF 

  1. AI featured funding overview – https://www.nsf.gov/focus-areas/artificial-intelligence#featured-funding-13c
  2. Seed fund on AI – https://seedfund.nsf.gov/topics/artificial-intelligence/ 

NIH 

AI featured funding overview – https://datascience.nih.gov/artificial-intelligence 

Bridge2AI – https://commonfund.nih.gov/bridge2ai 

NEH 

AI featured funding overview – https://www.neh.gov/AI 

Humanities Research Centers on AI – https://www.neh.gov/program/humanities-research-centers-artificial-intelligence 

Department of Education 

AI featured funding guidance – https://www.ed.gov/about/news/press-release/us-department-of-education-issues-guidance-artificial-intelligence-use-schools-proposes-additional-supplemental-priority 

SBIR (eligibility – small business) – https://ies.ed.gov/funding/research/programs/small-business-innovation-research-sbir/solicitation-information 

DoD 

AI Next Campaign – https://www.darpa.mil/research/programs/ai-next-campaign 

DAF AI Accelerator Fellowship – https://www.aiaccelerator.af.mil/Phantom-Program/ 

Run by the U.S. Air Force and MIT, this fellowship program places selected “Phantoms” into AI research teams to: 

  • Work on real-world DoD AI projects. 
  • Receive advanced AI training. 
  • Influence acquisition and policy for ethical AI deployment. 

It’s a five-month immersive experience for military and civilian personnel focused on AI innovation and implementation. 

DAF AI Launch Point – https://www.dafcio.af.mil/AI/  

This is the central AI innovation hub for the Department of the Air Force. It supports: 

  • AI strategy and policy development. 
  • Cross-agency collaboration on AI R&D. 
  • Launching new AI pilot programs and partnerships. 

Department of Energy (DoE) 

Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, Nuclear Physics, Isotope R&D and Production, and Accelerator R&D and Production – https://science.osti.gov/grants/FOAs/-/media/grants/pdf/foas/2024/DE-FOA-0003432.pdf 

Private Sector / Philanthropy: 

Google: Academic Research Awards – https://research.google/programs-and-events/google-academic-research-awards/ (overview: https://research.google/) 

Please let us know of other opportunities to include in the next CHAIS newsletter.  

 8. CHAIS Listserv 

If you do not want to be on this listserv, please let us know. Please also let us know if you would like to invite someone to join the listserv. 

 

AI auditing AI: Towards digital accountability

 

By Alan Shark

This article was originally published by Route Fifty. It is republished here with the author’s permission.

 

Artificial intelligence systems are now making decisions in policing, hiring, healthcare, cybersecurity, purchasing and finance — but errors or biases can have significant consequences.

Humans alone can’t keep up: models are too complex, too fast, and too large in scope. And yet nearly every AI policy states that humans must provide oversight and control, even though keeping pace with advances in AI applications is almost impossible for them. Worse, some of those charged with oversight admit to over-relying on AI applications. This is where the idea of AI systems designed to check other AI systems comes in.

Traditionally, humans have performed this role. Auditors, compliance officers, regulators and watchdog organizations have long worked to ensure systems operate as intended. But when it comes to AI, humans alone may no longer be enough. The models are too complex, too fast, and too embedded in decision-making pipelines for manual oversight to keep pace.

That’s why researchers and practitioners are turning to an intriguing solution: using AI itself to audit AI. Recognizing the impact of AI on government applications, in 2021, the Government Accountability Office developed an ahead-of-its-time report, “Artificial Intelligence — An Accountability Framework for Federal Agencies and Other Entities.” Although the framework was practical and far-reaching, it still relied on human planning and oversight.

Today, we are entering a new era of AI accountability, with talk of the advent of “watchdog AIs” or “AI auditors” that test, verify and monitor other AI models. This is increasingly important as AI grows more complex and less transparent to human reviewers.

Making the case for AI auditing, we can safely assume that AI can rapidly analyze outputs across millions of data points. And unlike human auditors, AI doesn’t get tired or overlook details. Auditing can occur in real time and flag problems as they arise. AI auditors can probe “black box” models with tests humans couldn’t perform manually. Taken together, AI auditing’s strengths can be summarized as scale, consistency, speed, transparency, and accuracy.

Auditing AI is not a single technology but a suite of methods. Some of the most promising approaches include:

  • Adversarial testing: One AI generates tricky edge cases designed to trip up another AI, exposing blind spots.
  • Bias and fairness detection: Auditing systems measure outcomes across demographic groups to reveal disparities.
  • Explainability tools: Specialized models analyze which factors most influenced a decision, helping humans understand why a model reached its conclusion.
  • Continuous monitoring: AI can watch for “model drift” — when performance degrades over time as data or circumstances change — and signal when retraining is needed (a minimal sketch of two such checks follows this list).
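
As a minimal illustration of what two of these checks can look like in code (a sketch with assumed function names, thresholds, and synthetic data, not a production audit), the following Python snippet computes a demographic-parity gap for bias detection and a population stability index (PSI) for drift monitoring:

import numpy as np

def demographic_parity_gap(predictions, groups):
    """Bias check: the largest gap in positive-outcome rates between
    demographic groups (0 means perfectly equal rates)."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def population_stability_index(baseline, live, bins=10):
    """Drift check: compare the score distribution the model was
    validated on with live scores; a large PSI suggests retraining."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.clip(np.histogram(baseline, bins=edges)[0] / len(baseline), 1e-6, None)
    live_pct = np.clip(np.histogram(live, bins=edges)[0] / len(live), 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# Illustrative use with synthetic data.
rng = np.random.default_rng(0)
preds = rng.integers(0, 2, 1000)         # binary model decisions
groups = rng.choice(["A", "B"], 1000)    # demographic labels
print(f"parity gap: {demographic_parity_gap(preds, groups):.3f}")

baseline = rng.normal(0.0, 1.0, 10_000)  # scores at validation time
live = rng.normal(0.4, 1.2, 10_000)      # shifted production scores
psi = population_stability_index(baseline, live)
# A common rule of thumb treats PSI above 0.25 as significant drift.
print(f"PSI: {psi:.3f}", "-> flag for retraining" if psi > 0.25 else "-> stable")

A real auditing system would wrap checks like these in continuous pipelines with alerting; the point here is only that such tests are straightforward to automate and run at scale.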

In many ways, this mirrors how cybersecurity works today, where red teams and intrusion-detection systems constantly test defenses. Here, the target is not a firewall but another algorithm.

Real-world applications are emerging. Though still in its early stages, AI auditing is moving beyond theory. Here are several examples:

  • Finance: Some firms are already deploying AI to double-check fraud-detection models, ensuring that suspicious activity flags are consistent and not biased.
  • Healthcare: AI-driven validation tools are being used to test diagnostic algorithms, checking their accuracy against known patient outcomes.
  • Cybersecurity: “Red team” AIs are being trained to attack models the way hackers might, helping developers harden systems before release.
  • Public sector pilots: Governments are beginning to experiment with algorithmic auditing programs, often in regulatory “sandboxes” where new models are tested under close supervision.

These examples suggest a growing recognition that human oversight must be paired with automated oversight if AI is to be trusted at scale. At the same time, we must acknowledge that AI auditing raises its own set of risks and limitations, including the following:

  • The infinite regress problem: If one AI audits another, who audits the auditor? At some point, humans must remain in the loop. Or perhaps there might be a third level of AI checking on AI, checking on AI.
  • Shared blind spots: If both models are trained on similar data, they may replicate the same biases rather than uncover them.
  • Over-trust: Policymakers and managers may be tempted to rely too heavily on “AI-certified AI” without questioning the underlying process.
  • Resource costs: Running parallel AI systems can be expensive in terms of computing power and energy consumption.

In short, as tempting as it may appear, AI auditors are not a panacea. They are tools—powerful ones, but only as good as their design and implementation.

This raises critical governance questions. Who sets the standards for AI auditors? Governments, industry consortia, or independent third parties? Should auditing AIs be open-source, to build public trust, or proprietary, to protect against exploitation? And how do we ensure accountability when the auditors themselves may be opaque? Can or should AI auditing be certified, and if so, by whom?

There are strong arguments for third-party, independent auditing — similar to how financial auditing works today. Just as markets rely on trusted external auditors, the AI ecosystem will need its own class of independent algorithmic auditors. Without them, self-auditing could resemble letting the fox guard the henhouse.

Most experts envision a layered approach: humans define auditing standards and interpret results, while AI handles the heavy lifting of large-scale checking. This would create multiple levels of defense — primary AI, auditing AI and human oversight.
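
To picture this layered approach in code, here is a minimal, illustrative Python sketch (assumed names and a deliberately trivial audit rule, not a description of any real deployment): a primary model decides, an auditing layer flags suspect cases, and flagged cases land in a human review queue, keeping humans as the final arbiter.

from dataclasses import dataclass, field

@dataclass
class Decision:
    input_id: str
    output: int                                      # the primary model's decision
    audit_flags: list = field(default_factory=list)  # issues raised by the auditing layer

def primary_model(case: dict) -> int:
    # Stand-in for the production model: a simple threshold decision.
    return int(case.get("score", 0.0) > 0.5)

def auditing_ai(case: dict, output: int) -> list:
    # Stand-in for automated audits: flag low-confidence, borderline cases.
    flags = []
    if abs(case.get("score", 0.0) - 0.5) < 0.05:
        flags.append("borderline score: low confidence")
    return flags

def human_review_queue(decisions: list) -> list:
    # Humans remain the final arbiter for anything the auditor flags.
    return [d for d in decisions if d.audit_flags]

cases = [{"id": "a", "score": 0.9}, {"id": "b", "score": 0.52}]
decisions = [Decision(c["id"], primary_model(c), auditing_ai(c, primary_model(c))) for c in cases]
print([d.input_id for d in human_review_queue(decisions)])  # prints ['b']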

The likely result will be a new industry built around AI assurance, certification, and compliance. Just as accounting gave rise to auditing firms, AI may soon give rise to an “AI auditing sector” tasked with keeping digital systems honest. And beyond the technical details lies something more important: public trust. The willingness of people to accept AI in critical domains may depend on whether robust and credible audit mechanisms exist.

AI auditing AI may sound strange at first, like machines policing themselves. But far from being a case of “the fox guarding the henhouse,” it may prove essential to making AI safe, reliable and trustworthy. The truth is, humans cannot realistically keep up with the scale and complexity of today’s AI. We need allies in oversight — and in many cases, the best ally may be another AI. Still, human judgment must remain the final arbiter.

Just as financial systems depend on auditors to ensure trust, the AI ecosystem will need its own auditors—both human and machine. The future of responsible AI may well depend on how well we design these meta-systems to keep each other in check.

 

 

Dr. Alan R. Shark is a senior fellow at the Center for Digital Government and an associate professor at the Schar School of Policy and Government, George Mason University, where he also serves as a faculty member at the Center for Human-AI Innovation in Society (CHAIS). Shark is also a senior fellow and former Executive Director of the Public Technology Institute (PTI). He is a Fellow of the National Academy of Public Administration and Founder and Co-Chair of the Standing Panel on Technology Leadership. Shark is the host of the podcast Sharkbytes.net.

CAHMP Summer 2025 GRA Fellowship Now Open for Applications

Call for Applications: CAHMP Summer 2025 GRA Fellowship   

We are inviting applications for a limited number of summer GRAs from Mason doctoral students who do not already have Mason summer funding. If accepted, each awardee will receive a stipend of $9,000 this summer. Applying students should provide a short summary of the project they will be working on over the summer. While we will consider all projects concerning human-machine partnerships, special consideration will be given to transdisciplinary projects centering around CAHMP thematic thrusts: 

  • Fairness, Accountability, Transparency, Inclusion, and Equity in AI 
  • AI Policy and Governance  
  • Human-Computer Interaction (human-machine teaming) 
  • Assistive Technology  
  • Learning Technology 
  • Community Informatics  
  • Generative AI and applications 

 (including AR/VR, computer vision, wearable tech, robotics, cyber, data science, digital humanities, etc.) 

Recipients of this summer funding may be invited to present their projects at a CAHMP social event during the next academic year. 

Application Instructions 

Applicants, please submit the following here – https://forms.office.com by 11:59pm Thursday, March 27th, 2025:

  1. A project title and a 250-word proposal that describes your project.
  2. A CV.
  3. The name(s) of your faculty mentor(s). One of the faculty mentors must be a CAHMP faculty member; applicants are encouraged to have more than one mentor, from different departments. 

Eligibility requirements 

–Applicants must be Mason doctoral students listed in university records as full-time during Spring 2025 and must plan to return as a doctoral student in Fall 2025.  

–GPA 3.0 and good standing  

–Receive no other funding from Mason over the summer. Faculty mentors will be asked to provide confirmation that the applicant has no other funding at Mason over the summer.  

–This fellowship is intended primarily for doctoral students who are well along in completing their degrees.

  

AI and Data-Driven Decision-Making for Education Policy and Equity: Convening at George Mason University Highlights Key Insights for School Leaders


Fairfax, VA – George Mason is bridging research and practice on AI for education. On October 8, 2024, leading educators, policymakers, and AI experts gathered at George Mason University for the “AI and Data-Driven Decision-Making for Education Policy and Equity” convening. The event, held at Merten Hall on GMU’s Fairfax campus, focused on the rapidly growing role of AI in education and its potential to reshape school systems, streamline administrative tasks, promote data-driven decision-making, and prepare teachers and students for an AI-enabled learning environment.

At 9 a.m., a fully packed Merten Hall Room 1201 was welcomed by Mason professor of education Anne Holton, who, together with the Dean of the College of Education and Human Development, Ingrid Guerra-López, and Mason’s inaugural Chief AI Officer, Amarda Shehu, opened the event with inspiring remarks. The event continued with a wide range of expert presentations and panel discussions. Roberto J. Rodriguez, Assistant Secretary for the Office of Policy Planning and Development at the U.S. Department of Education, provided insights on how federal education policy is responding to the rise of AI. David Myers, Deputy Superintendent and CIO of the Virginia Department of Education, shared perspectives on how AI can be responsibly implemented to address specific challenges in education.

Throughout the day, panelists and experts provided key insights for school and district leaders, underscoring the importance of thoughtful AI integration. School leaders are increasingly adopting AI tools to reduce administrative burdens and enhance classroom efficiency. However, speakers cautioned that the implementation of these tools must prioritize student privacy and compliance with existing data governance policies, such as FERPA. Districts were encouraged to evaluate their current data infrastructure, ensure that AI tools are compatible with privacy regulations, and carefully pilot these technologies before implementing them on a larger scale.

The event also highlighted the importance of providing teachers with professional development to navigate the integration of AI in their work. This includes training on the safety and privacy aspects of AI tools, particularly when compared to widely available, free alternatives, and on how AI can be used to enhance teaching and learning. GMU faculty showcased promising research on the ways AI can enrich classrooms, such as generating differentiated learning materials and serving as virtual collaborators for students in subjects like mathematics. Still, attendees were reminded that while AI offers exciting opportunities, its integration must be done thoughtfully to avoid unintended consequences, particularly with regard to data privacy and educational equity.

One of the key concerns discussed was the potential for AI algorithms to perpetuate societal biases present in the data they are trained on. Several experts stressed that AI literacy curricula must address these limitations, ensuring that both students and teachers are aware of the challenges associated with AI-generated materials, including biased content and harmful stereotypes. This is a crucial aspect of preparing students to critically engage with AI, rather than accepting its outputs at face value.

Participants explored case studies where AI is already being used to solve problems in school systems, such as improving transportation logistics and enhancing lesson planning. The event concluded with a discussion on fostering collaboration between school districts and researchers, encouraging knowledge sharing and the development of AI strategies that can be scaled across different educational contexts. As a result of this convening, a community of practice will carry out the much-needed ongoing conversations.

The convening, organized by the AI and Data-Informed Education Policy Initiative (AIEP) at George Mason University, exemplified the university’s commitment to transdisciplinary collaboration and evidence-based policy development. AIEP, in partnership with EdPolicyForward and the Center for Advancing Human-Machine Partnership (CAHMP), leverages AI and advanced data analytics to address pressing challenges in education policy and equity.

CAHMP Summer Doctoral GRA Research Presentation

 

 

Time: 1-3pm, Monday, 9/9

Zoom: https://gmu.zoom.us/j/94054653093

Presentation Details: Each student has up to 8 minutes to present their summer research project. A 2-minute Q&A and feedback session can follow each presentation.

Schedule (time – presenter, area of study: presentation title):

  1. 1:10–1:20 – Chahat Raj, Computer Science: Social Prejudice in Multimodal Generative AI through Visual Question Answering
  2. 1:20–1:30 – Neelam Shukla, Public Policy: AI and the Future of Taxation
  3. 1:30–1:40 – Liuchuan Yu, Computer Science: A Collaborative Construction Platform in Virtual Reality
  4. 1:40–1:50 – Minyoung Kim, Computer Science: A Multimodal Framework for Synthesizing Interactive and Adaptive Narratives in Augmented Reality
  5. 1:50–2:00 – Hao Yan, Computer Science: Can Code LLMs Be Easily Exploited to Produce Vulnerable Code?
  6. 2:00–2:10 – Irene Feng, Human Factors and Applied Cognition: Ethical vs. Unethical AI Advisors
  7. 2:10–2:20 – Fairuz Nawer Meem, Computer and Information Science: Underlying Motivations for the Adoption of ChatGPT in Software Development Tasks
  8. 2:20–2:30 – Gaurab Pokharel, Computer Science: EvoFair: Navigating the Dynamics of Fairness Evolution
  9. 2:30–2:40 – Md Shafkat R Farabi, Computer Science: Modelling Fair Assignment of Mediators in Kenya Using Multi-Armed Bandits
  10. 2:40–2:50 – Vasilii Nosov, Public Policy: Quality of Regulation and the AI Infrastructure Depth


Podcast: LLMs, Learning, and the Law: Navigating the Opportunities and Challenges of AI in Education

2024 Transdisciplinary Center Summer GRA Fellowship Now Open for Applications

Call for Applications: CAHMP Summer 2024 GRA Fellowship   

We are inviting applications for summer research assistance funding from Mason Master’s and doctoral students who do not already have Mason summer funding. If accepted, each awardee will receive a stipend of $6,500 (Master’s) or $8,500 (doctoral). Applying students should provide a short summary of the project they will be working on over the summer. While we will consider all projects concerning human-machine partnerships, special consideration will be given to transdisciplinary projects centering around CAHMP thematic thrusts: 

  • Fairness, Accountability, Transparency, Inclusion, and Equity in AI 
  • AI Policy and Governance  
  • Human-Computer Interaction (human-machine teaming) 
  • Assistive Technology  
  • Learning Technology 
  • Community Informatics  
  • Generative AI and applications 

 (including AR/VR, computer vision, wearable tech, robotics, cyber, data science, digital humanities, etc.) 

Recipients of this summer funding may be invited to present their projects at a CAHMP social event during the next academic year. 

Application Instructions 

Applicants, please submit the following here – https://forms.office.com/r/QQKJVUR4hx by April 14th, 2024:

  1. A project title and a 250-word proposal that describes your project; 
  2. The student’s level of study (Master’s or doctoral), home department, and faculty mentor(s), one of whom must be a CAHMP core or affiliate faculty member; 
  3. A CV. 

Mentors, please provide your quick endorsement here – https://forms.office.com/r/RTiRimH1k3 by April 14th, 2024. The endorsers must confirm that there are no alternative means of supporting the applicant over the summer.  

Eligibility requirements 

–Applicants must be Mason students listed in university records as full-time during Spring 2024 and must plan to return as a graduate student in Fall 2024.  

–GPA 3.0 and good standing  

–Receive no other funding from Mason over the summer  

–We will give preference to new applicants who have not received previous CAHMP GRA support.  

  

 

CAHMP Receives Funds for Collaborative Research in Ed Policy and Artificial Intelligence

Pictured above is the research team with FCPS Superintendent Michelle Reid and FCPS CIO Gautam Sethi. (From left to right: Michelle Reid, Anne Holton, Sanmay Das, Peng Warweg, Seth Hunter, and Gautam Sethi.)

 

Two centers at George Mason University—EdPolicyForward in the College of Education and Human Development and the Center for Advancing Human Machine Partnership (CAHMP), a transdisciplinary research center—will collaborate on a project that aims to better connect research to practice in education policymaking, and support school divisions in improving student and teacher performance and enhancing the effectiveness and fairness of resource allocation through data-driven decision making.

The collaboration is supported by a generous gift from the William and Flora Hewlett Foundation, a nonpartisan, private charitable foundation that advances ideas and supports institutions to promote a better world. The collaboration between EdPolicyForward and CAHMP demonstrates the power and opportunities that transdisciplinary research can bring to Mason’s research and practice community.

The Education Program at Hewlett Foundation makes grants to help educators, schools and communities turn schools into places that empower and equip students for a lifetime of learning and to reach their full potential.

The core Mason project team includes Sanmay Das, professor of computer science and co-director of CAHMP; Anne Holton, professor of public policy and education; Seth B. Hunter, assistant professor of education leadership; David Houston, director of EdPolicyForward and assistant professor of education; and Peng Warweg, assistant director of CAHMP.

The project will leverage the core team’s expertise in education policy research, artificial intelligence, and advanced data analytics, as well as partnerships with local school divisions, to better connect research to practice, with the goal of helping all students succeed through efficient, effective, and equitable use and distribution of school resources.

Over the next 16 months, the team will deliver a series of policy briefs communicating relevant research findings to state and local leaders on matters of importance to their decision-making, creating a two-way dialogue between researchers and policymakers, and will develop a dissemination strategy to inform the policymaking decisions of local, state, and national audiences.

The project is also expected to enhance the capacity for data-driven decision-making in school divisions, especially in the use of methods from artificial intelligence (AI) and advanced data analytics to support and enhance equity in access to programs that benefit student outcomes and improve student and teacher performance.

EdPolicyForward, the Center for Education Policy, promotes equity and improved educational outcomes for all students, from preschool through college and beyond. The center connects research to policy and practice, develops and advances effective and pragmatic solutions, and drives meaningful public discourse addressing persistent inequalities in U.S. public education.

Founded in 2019, the transdisciplinary CAHMP facilitates research, teaching, and innovation in understanding and leveraging human-machine partnership to solve current and emerging societal problems. CAHMP’s network of researchers works on cross-disciplinary topics, such as responsible AI, learning technology, education policy and analytics, and generative AI for education.

2023 AI and Tech Policy Summer Institute Opening Reception

2023 CAHMP Mini Seed Funding

Overview and Purpose

The purpose of this mini seed funding is to enable pilot research that will lead to submissions to externally funded research programs in the broad area of “human machine partnership.” We seek to identify and support interdisciplinary teams planning to pursue novel convergent research. The seed grant can support new or existing teams to perform activities, such as faculty effort, preliminary research, team building, and project scoping, that are necessary to enable the submission of a competitive larger proposal in the future. Funded projects should lead directly to an external proposal submission as a next step.

Investigator Specification

Proposals should come from teams of at least two investigators from at least two different departments or disciplines. CAHMP Affiliate members must collaborate with a Core member to submit an application. In addition to listing the investigators, proposals should identify additional faculty members, disciplines, or external partners, if applicable, that the team will engage during the seed grant period.

Activities Eligible for Support

The mini seed grants aim to support a range of planning activities intended to foster a convergent research team that can effectively integrate multiple disciplinary perspectives, explore the research theme in depth, build collaborations with relevant stakeholders, and hone specification of research gaps, questions, and hypotheses. Activities within scope include, but are not limited to, workshops, stakeholder meetings, literature reviews, data collection, preliminary experiments, prototypes, and pilots. In all cases, the proposed activities should be designed as a step along the path to a future external grant.

Budget Guidelines

Teams should propose a budget commensurate with the scope of their proposed activities and their intended next steps. The maximum allowable budget for a proposal is $20,000. The total requested amount, as well as the individual line items, should be clearly justified in the budget section of the proposal.

Eligible budget categories include faculty time (summer salary and/or AY course buyout), student assistantships and/or wages, travel (for team building), equipment, materials & supplies, and participant support costs. There is no need to budget indirect costs.

Funds will be transferred to the PI’s home department. Unspent funds must be transferred back to CAHMP within 45 days of the project’s end date. Project periods should end before June 30, 2024.

Proposal Content

Proposals should be no more than three (3) single-spaced pages (please use the attached template), using Times New Roman 12-point font. Proposals should address the following topics:

  • Definition of the research theme/area and motivation for its importance
  • History and current status of the research theme (literature/conceptual review)
  • Vision for development of the research theme
  • Qualifications of the interdisciplinary team
  • Specific external grant objective and target source
  • Timeline (projects should not extend past June 30, 2024)
  • Budget and budget justification

Review Criteria

  • Strength and potential of the interdisciplinary team
  • Importance and potential of the research theme
  • Opportunity for the team to create a distinct and compelling research niche
  • Clarity and feasibility of the plan to achieve external funding
  • Overall potential return on investment
  • Relevance of the theme to the vision and interests of CAHMP and Mason

Submission Process

Email a PDF copy of your submission to Peng Warweg at [email protected] by 5:00 p.m. EDT on Wednesday, May 31, 2023.

Proposals will be reviewed, and funding decisions will be made by the CAHMP leadership team and external advisors by no later than June 30, 2023 (earlier if possible).