
AI does not only affect how we write. It changes how we handle data, how we interpret evidence, and how easily our work can travel into decision-making spaces.

That is why ethical research in the age of AI requires more than caution about “hallucinations”. It requires governance: clear boundaries, traceable decisions, and disciplined methods that do not collapse under pressure.

Data ethics: provenance, consent, privacy, and governance are not administrative overhead

AI use often pushes researchers into casual relationships with data: scraping, uploading, reformatting, and sharing across tools with minimal attention to downstream harm.

Four boundaries matter:

1) Provenance
Where did the data come from? Under what conditions was it produced? What obligations follow from that origin—especially when knowledge has been taken from communities with little return?

2) Consent beyond tick-box logic
Publicly visible does not mean ethically available. Vulnerable groups and constrained environments require a harm lens, not a legalistic one.

3) Confidentiality and “no-upload zones”
If you would not place the raw data on a public website, you should not place it into third-party systems with opaque retention and training policies. De-identification is not a magic shield; re-identification is often easier than people assume.
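
To see how easily, consider a minimal sketch (Python; the records and fields are invented for illustration): count how many "de-identified" records are already unique on a handful of harmless-looking attributes.

    from collections import Counter

    # Toy "de-identified" records: postcode, birth year, gender, occupation.
    records = [
        ("2150", "1987", "F", "teacher"),
        ("2150", "1987", "F", "nurse"),
        ("2150", "1991", "M", "teacher"),
        ("4880", "1987", "F", "teacher"),
    ]

    # How many records share each quasi-identifier combination?
    combos = Counter((postcode, birth_year, gender)
                     for postcode, birth_year, gender, _occupation in records)

    # A combination that occurs exactly once points at a single person;
    # one join against an outside dataset can re-identify them.
    unique = sum(1 for count in combos.values() if count == 1)
    print(f"{unique} of {len(records)} records are unique on just 3 fields")

Published re-identification studies report the same pattern at scale, which is why removing names alone does not license upload.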

4) Governance that stands up to scrutiny
Storage, access, versioning, retention, and team agreements are part of research integrity, not bureaucracy. In practice, governance is where ethics becomes enforceable.
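
Because it is enforceable, governance can also be expressed as a check rather than a reminder. A minimal sketch (Python; the classification labels are assumptions a team would define in its own data agreement, not a standard):

    # Hypothetical classifications from the team's data agreement.
    NO_UPLOAD = {"confidential", "identifiable", "community-restricted"}

    def check_upload(classification: str, destination: str) -> None:
        """Refuse to send restricted data to third-party systems."""
        if classification.lower() in NO_UPLOAD:
            raise PermissionError(
                f"'{classification}' data may not be sent to {destination}"
            )

    # check_upload("identifiable", "third-party summariser")  # raises PermissionError

A check that raises, rather than warns, turns a team agreement into a default that cannot be quietly skipped.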

Ethical research is not only about what you meant to do. It is about what your workflow makes possible—and for whom.

Qualitative research: assistance without analytic substitution

In qualitative work, the stakes sharpen, because meaning is not extracted like a mineral. It is argued—through context, reflexive judgement, and traceable interpretation.

AI can support the mechanical layer of the work: transcription and clean-up, first-pass summaries that the researcher then checks against the data, and locating passages or candidate patterns for human review.

AI should not be used to generate interpretations that no one can trace back to the data, to assign meaning or themes without reflexive human judgement, or to stand in for the analytic argument itself.

A defensible practice is quote-to-claim traceability: every analytic claim is linked to the specific excerpts that support it, and every excerpt carries its source, so a reviewer can walk from finding back to data (one possible shape is sketched below).
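
A minimal sketch (Python; the structure and field names are illustrative, not a standard):

    from dataclasses import dataclass, field

    @dataclass
    class Quote:
        source: str      # e.g. "interview_07.txt"
        location: str    # line range or timestamp
        text: str

    @dataclass
    class Claim:
        statement: str
        evidence: list[Quote] = field(default_factory=list)

    claim = Claim(
        statement="Participants framed workload as a safety issue.",
        evidence=[Quote("interview_07.txt", "lines 112-118",
                        "If we rush, someone gets hurt.")],
    )

    # A claim with no linked evidence should not survive review.
    assert claim.evidence, "unsupported claim"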

Rigour is not the absence of bias. It is the visibility of reasoning—and the willingness to be accountable for how interpretation is produced.

Writing, authorship, and transparency: keeping analysis human-owned

AI is often marketed as a writing partner. That framing can quietly become ghost-argumentation: outsourcing reasoning while retaining the name.

A clean distinction helps: AI may assist with the surface of the text (wording, structure, clarity checks), but the claims, the reasoning, and the interpretation must originate with, and remain owned by, the named authors.

“Paraphrase laundering” is not made ethical because it is difficult to detect. It remains plagiarism, and it weakens scholarship by separating claims from the intellectual labour and accountability that produced them.

Transparency is not performative disclosure. It is what a reader, participant, or reviewer would need to trust your method: which tools were used, for which tasks, on what data, and how their outputs were verified before they entered the work.

A practical operating rule: the AI Use Log

One of the most effective ways to protect integrity—without banning tools—is to build an audit trail that makes decision-making visible.

A minimal AI Use Log records (one possible shape is sketched below):

  - the date, the tool, and the task it was used for;
  - what went in, and at what level of sensitivity;
  - how the output was used, changed, or discarded;
  - who verified the result before it entered the work.
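
A minimal sketch of such a log in Python (the field names are illustrative; an append-only file keeps the trail honest):

    import csv
    from datetime import date

    FIELDS = ["date", "tool", "task", "input_type",
              "how_output_was_used", "verified_by"]

    def log_use(path: str, **entry: str) -> None:
        """Append one AI-use record to a CSV audit trail."""
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if f.tell() == 0:  # new file: write the header once
                writer.writeheader()
            writer.writerow({"date": str(date.today()), **entry})

    log_use("ai_use_log.csv",
            tool="LLM chat assistant",
            task="first-pass summary of field notes",
            input_type="de-identified notes",
            how_output_was_used="checked against notes; two points discarded",
            verified_by="lead researcher")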

This does two things:

  1. It forces epistemic humility: you cannot treat outputs as neutral.
  2. It strengthens credibility: your research can be scrutinised without becoming mystified.

In higher-risk work, the log is not an administrative add-on. It is part of ethics.

The deeper question: what kind of research culture are we building?

AI adoption is often framed as individual productivity. Ethical research requires a wider frame: incentives, institutional defaults, and power.

Ask:

  - Who benefits when AI speeds up our output, and who carries the risk when it is wrong?
  - What do our institutional defaults make easy, and what do they make invisible?
  - Whose data, and whose labour, does our workflow quietly depend on?

A stance, stated plainly

AI can be part of a responsible research workflow, if it is bounded by clear data governance, human-owned analysis, honest disclosure, and an audit trail that keeps decisions traceable.

Used without those constraints, AI makes research look stronger while making it less defensible—and more capable of harm.

If you want to stress-test your current practice, start with one prompt to yourself:

What am I currently outsourcing that I would not feel comfortable defending in front of the people most affected by my conclusions?

That question is rarely comfortable. It is also where ethical research begins.

CTDC works with research institutions to embed ethical AI use across the research cycle—from data governance and methodological integrity to authorship, transparency, and harm prevention.

CTDC Academy’s forthcoming course, Research in the Age of AI, offers structured, practice-based learning for teams and professionals navigating these shifts.  
 


"

"