
AI does not only affect how we write. It changes how we handle data, how we interpret evidence, and how easily our work can travel into decision-making spaces.

That is why ethical research in the age of AI requires more than caution about “hallucinations”. It requires governance: clear boundaries, traceable decisions, and disciplined methods that do not collapse under pressure.

Data ethics: provenance, consent, privacy, and governance are not administrative

AI use often pushes researchers into casual relationships with data: scraping, uploading, reformatting, and sharing across tools with minimal attention to downstream harm.

Four boundaries matter:

1) Provenance
Where did the data come from? Under what conditions was it produced? What obligations follow from that origin—especially when knowledge has been taken from communities with little return?

2) Consent beyond tick-box logic
Publicly visible does not mean ethically available. Vulnerable groups and constrained environments require a harm lens, not a legalistic one.

3) Confidentiality and “no-upload zones”
If you would not place the raw data on a public website, you should not place it into third-party systems with opaque retention and training policies. De-identification is not a magic shield; re-identification is often easier than people assume.

4) Governance that stands up to scrutiny
Storage, access, versioning, retention, and team agreements are part of research integrity, not bureaucracy. In practice, governance is where ethics becomes enforceable.

Ethical research is not only about what you meant to do. It is about what your workflow makes possible—and for whom.
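
In practice, these boundaries can be captured in a short manifest that travels with each dataset. The sketch below assumes a Python-based workflow; every field name and value is illustrative rather than prescribed.

    from dataclasses import dataclass

    @dataclass
    class DatasetManifest:
        name: str
        provenance: str           # where the data came from, and under what conditions
        consent_basis: str        # how consent was obtained, and its limits
        sensitivity: str          # e.g. "public", "internal", "no-upload"
        allowed_tools: list[str]  # systems this data may be placed into
        retention: str            # how long the data is kept, and why
        access: list[str]         # roles or named people with access
        steward: str              # who is accountable for this dataset

    # Hypothetical example entry; names and values are invented for illustration.
    interviews = DatasetManifest(
        name="community-interviews-2024",
        provenance="Semi-structured interviews recorded with participants' knowledge",
        consent_basis="Consent for research use only; no third-party sharing",
        sensitivity="no-upload",  # never placed into external AI tools
        allowed_tools=["local transcription", "encrypted institutional storage"],
        retention="Raw audio deleted 24 months after project close",
        access=["principal investigator", "research assistant"],
        steward="principal investigator",
    )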

Qualitative research: assistance without analytic substitution

In qualitative work, the stakes sharpen, because meaning is not extracted like a mineral. It is argued—through context, reflexive judgement, and traceable interpretation.

AI can support:

  • transcription assistance (with quality checks),
  • organisation and retrieval,
  • candidate coding suggestions (clearly marked as candidates).

AI should not be used to:

  • “find themes” as if themes exist independent of interpretive judgement,
  • generate analytic claims without a transparent quote-to-claim chain,
  • replace reflective memos and human reasoning,
  • translate sensitive testimony without safeguards.

A defensible practice is quote-to-claim traceability:

  • What excerpt supports this interpretation?
  • What alternative interpretation could fit the same excerpt?
  • What contextual knowledge is required to read this properly?
  • What assumptions am I bringing into the analysis?
  • Who should be involved in validating my interpretation, and what would meaningful validation look like?

Rigour is not the absence of bias. It is the visibility of reasoning—and the willingness to be accountable for how interpretation is produced.
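
One way to keep that reasoning visible is to record each quote-to-claim link as a structured note. The sketch below assumes a Python-based workflow; the fields mirror the questions above, and the example content is fabricated purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class QuoteToClaim:
        excerpt: str              # verbatim excerpt being interpreted
        interpretation: str       # the claim the excerpt is taken to support
        alternatives: list[str]   # other readings that could fit the same excerpt
        context_needed: str       # contextual knowledge required to read it properly
        assumptions: list[str]    # assumptions the analyst is bringing
        validation: str           # who validates this, and what validation looks like
        ai_candidate: bool        # whether an AI tool suggested the candidate code

    # Fabricated example entry, for illustration only.
    note = QuoteToClaim(
        excerpt="We stopped going to the clinic after the fees changed.",
        interpretation="Cost, rather than distance, is driving reduced attendance.",
        alternatives=["The fee change coincided with a staffing change participants also mention"],
        context_needed="Local fee-waiver rules in place during the study period",
        assumptions=["The participant is describing their own household"],
        validation="Member-checking session with participants before claims are finalised",
        ai_candidate=True,        # marked as a candidate, not an analytic conclusion
    )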

Writing, authorship, and transparency: keeping analysis human-owned

AI is often marketed as a writing partner. That framing can quietly become ghost-argumentation: outsourcing reasoning while retaining the name.

A clean distinction helps:

  • Acceptable support: clarity, structure, editing, language polishing, formatting.
  • Unacceptable substitution: generating the argument, producing conclusions, rewriting others’ ideas in ways that preserve their structure while disguising attribution.

“Paraphrase laundering” is not made ethical because it is difficult to detect. It remains plagiarism, and it weakens scholarship by separating claims from the intellectual labour and accountability that produced them.

Transparency is not performative disclosure. It is what a reader, participant, or reviewer would need to trust your method:

  • What AI was used, for which tasks?
  • What was verified, and how?
  • What was not verified?
  • What limitations follow from those decisions?

A practical operating rule: the AI Use Log

One of the most effective ways to protect integrity—without banning tools—is to build an audit trail that makes decision-making visible.

A minimal AI Use Log records:

  • the task (e.g., “summarise an article I provided”),
  • the prompt,
  • the output,
  • what you accepted or rejected (and why),
  • what you verified (and how),
  • what you changed after verification.

This does two things:

  1. It forces epistemic humility: you cannot treat outputs as neutral.
  2. It strengthens credibility: your research can be scrutinised without becoming mystified.

In higher-risk work, the log is not an administrative add-on. It is part of ethics.
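
Kept as a plain structured record, a single log entry can be as simple as the sketch below. It assumes a Python-based workflow; the field names mirror the list above, and the example content is invented for illustration.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AIUseLogEntry:
        logged_on: date
        task: str             # what the tool was asked to do
        tool: str             # which system was used
        prompt: str           # the prompt as given
        output_ref: str       # the output, or a pointer to where it is saved
        accepted: str         # what was accepted or rejected, and why
        verified: str         # what was verified, and how
        changed: str          # what was changed after verification

    # Invented example entry, for illustration only.
    entry = AIUseLogEntry(
        logged_on=date(2025, 3, 4),
        task="Summarise an article I provided",
        tool="general-purpose chat assistant",
        prompt="Summarise the attached article in 200 words for a methods memo.",
        output_ref="Saved alongside the project notes for this date",
        accepted="Kept the structure; rejected a claim about sample size not in the article",
        verified="Checked every factual statement against the original article",
        changed="Corrected the sample size and removed an unsupported causal claim",
    )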

The deeper question: what kind of research culture are we building?

AI adoption is often framed as individual productivity. Ethical research requires a wider frame: incentives, institutional defaults, and power.

Ask:

  • What pressures are making shortcutting feel normal?
  • Who bears the cost when errors travel?
  • Which forms of knowledge get flattened or erased?
  • What would responsible use look like here, given the political and organisational realities?
  • Are we treating research as a deliverable, or as a relational process of inquiry that requires dialogue, validation, and accountability?

A stance, stated plainly

AI can be part of a responsible research workflow—if it is bounded by:

  • human ownership of inquiry and interpretation,
  • verification as method (not a final step),
  • clear rules on data sensitivity and no-upload zones,
  • non-extractive practice and accountability to affected communities,
  • transparency that meaningfully supports review and trust.

Used without those constraints, AI makes research look stronger while making it less defensible—and more capable of harm.

If you want to stress-test your current practice, start with one prompt to yourself:

What am I currently outsourcing that I would not feel comfortable defending in front of the people most affected by my conclusions?

That question is rarely comfortable. It is also where ethical research begins.

CTDC works with research institutions to embed ethical AI use across the research cycle—from data governance and methodological integrity to authorship, transparency, and harm prevention.

CTDC Academy’s forthcoming course, Research in the Age of AI, offers structured, practice-based learning for teams and professionals navigating these shifts.  
 

Reach Out to Us

Have questions or want to collaborate? We'd love to hear from you.

"

"