Sarah M. Hykin

Improving the Reliability of AI-Assisted Workflows

Practical methods for getting more consistent, auditable, and useful outputs from large language models—using tools teams already have.

What I Do

Most organizations adopting AI are seeing the same pattern:


* Outputs are promising but inconsistent.

* Teams spend significant time verifying, correcting, and reworking results.


My work focuses on improving how people interact with large language models so that outputs are more reliable, more consistent, and more useful in real-world workflows.

Approach

Rather than introducing new tools, this approach focuses on how existing systems like Claude are used.


Key principles include:


* structuring interactions to improve reasoning quality

* breaking complex tasks into staged workflows

* maintaining continuity across multi-step analysis

* reducing variability across outputs


In practice, this leads to less rework, more consistent results, and better overall productivity.
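The principles above can be sketched in code. Everything here is illustrative: `call_model` is a hypothetical stub standing in for a real LLM API call, and the two stages shown (extract facts, then verify with a majority vote over repeated samples) are one possible way to stage a task and reduce output variability, not a definitive implementation.

```python
# Minimal sketch of a staged LLM workflow with a variability-reducing check.
# `call_model` is a hypothetical placeholder for a real model API call.
from collections import Counter


def call_model(prompt: str) -> str:
    """Stub standing in for an LLM call; returns canned responses here."""
    canned = {
        "extract": "dose=50mg; route=oral",
        "check": "dose=50mg; route=oral",
    }
    for keyword, response in canned.items():
        if keyword in prompt:
            return response
    return ""


def staged_analysis(document: str, samples: int = 3) -> str:
    # Stage 1: pull structured facts out of the raw document.
    facts = call_model(f"extract key facts from: {document}")
    # Stage 2: re-run a verification step several times and keep the
    # majority answer, trading extra calls for lower output variability.
    answers = [call_model(f"check these facts: {facts}") for _ in range(samples)]
    majority, _ = Counter(answers).most_common(1)[0]
    return majority


print(staged_analysis("Protocol text..."))  # prints the majority answer: dose=50mg; route=oral
```

Splitting the task this way keeps each prompt narrow and auditable: a reviewer can inspect the intermediate facts from stage 1 before trusting the final answer.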

Application

This approach has been applied to:


* clinical protocol analysis

* benchmarking and comparative workflows

* multi-step analytical tasks requiring consistent reasoning


These are environments where “almost correct” outputs still carry meaningful cost.

Why It Matters

As AI adoption scales, the limiting factor is no longer access to tools—it’s how effectively those tools are used.


Small changes in interaction structure can significantly improve:


* output quality

* consistency

* time-to-result

Contact

smhykin@uefoundation.ai