DermUnbound

One Year of Physician-Controlled AI: Lessons from the Clinic

Dr. Yehonatan Kaplan · 8 min read
reflection · clinical-ai · lessons-learned

Twelve Months In

One year ago, I deployed the first Docker-based AI tool in my Mohs surgery practice. Since then, the DermUnbound framework has grown to five tools, processed thousands of clinical images, and fundamentally changed how I practice. This post is not a victory lap. It is an honest accounting of what worked, what did not, and what I would do differently. The five lessons below are drawn from daily clinical use, not from theory or conference presentations.

Lesson 1: Trust Drives Adoption

The single most important factor in whether a clinical AI tool gets used is trust. Not model accuracy, not interface design, not speed -- trust. My surgical nurses started using MohsPedia's AI features only after I demonstrated, over several weeks, that the tool ran entirely offline. They needed to see with their own eyes that no patient data left the building. Once that trust was established, adoption was immediate. The lesson is clear: if you want clinicians to use AI, show them where the data goes. Better yet, show them it goes nowhere.
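One way to make that demonstration concrete is to run the container with its network stack removed entirely and show that the tool still works. A minimal sketch, assuming a hypothetical image name (`dermunbound/mohspedia`); the actual image and tool names in the clinic may differ:

```shell
# Start the tool with no network at all. With --network none the container
# gets only a loopback interface, so nothing can leave the host.
docker run -d --name mohspedia-demo --network none dermunbound/mohspedia:latest

# Show skeptical staff that the container's only network is "none"
docker inspect -f '{{json .NetworkSettings.Networks}}' mohspedia-demo
```

The point of the second command is not the JSON itself but that anyone in the room can see the container has no route off the machine.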

Lesson 2: Docker Simplified Everything

Before Docker, deploying AI tools in the clinic was a nightmare of dependency conflicts, version mismatches, and environment-specific bugs. Docker eliminated all of that. One command pulls the image. One command starts the tool. The container includes everything -- the model, the server, the dependencies, the configuration. When something breaks, you stop the container and start a fresh one. When a new version is ready, you pull the updated image. The operational simplicity of Docker is the single biggest technical insight from this year.
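The whole lifecycle described above fits in a handful of commands. A sketch, again assuming a hypothetical image name (`dermunbound/mohspedia`) and port (8080), not the clinic's actual configuration:

```shell
# One command pulls the image (model, server, dependencies, config included)
docker pull dermunbound/mohspedia:latest

# One command starts the tool and exposes its web UI on the clinic machine
docker run -d --name mohspedia -p 8080:8080 dermunbound/mohspedia:latest

# When something breaks: discard the container and start a fresh one
docker rm -f mohspedia
docker run -d --name mohspedia -p 8080:8080 dermunbound/mohspedia:latest

# When a new version is ready: pull the updated image, then restart as above
docker pull dermunbound/mohspedia:latest
```

Because the container holds no patient data (images live on the host, not in the container), throwing a container away and starting fresh is always safe.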

Lesson 3: Good Enough Beats Perfect

My first dermoscopy classifier had an accuracy of 84 percent on our internal test set. A cloud-based competitor advertised 96 percent. The temptation was to wait -- to keep training, keep tuning, keep chasing state-of-the-art benchmarks. But 84 percent accuracy on a local, private, always-available tool turned out to be more clinically useful than 96 percent accuracy on a cloud service that was unavailable during internet outages, introduced latency into the surgical workflow, and created compliance questions I could not answer.

Lesson 4: AI-Assisted Development is Real

I built the majority of the DermUnbound codebase using AI-assisted development -- describing features to coding assistants, reviewing the output, and iterating. This is not a parlor trick. It is a legitimate development methodology that allowed a full-time practicing physician to build and maintain five clinical tools. The code is not always elegant. Some functions could be refactored. But the tools work, they are tested in clinical practice, and they solve real problems. The perfect should not be the enemy of the working.

Lesson 5: Privacy is a Competitive Advantage

I expected privacy to be a constraint -- something that limited our options and made development harder. It turned out to be the opposite. Privacy-first architecture became our strongest differentiator. When colleagues ask about our AI tools, the first thing they want to know is where the data goes. When I tell them it never leaves the clinic, the conversation shifts from skepticism to interest. In a medical environment where data breaches make headlines and regulatory scrutiny is increasing, privacy is not a limitation -- it is the feature that opens doors.

Looking Ahead: Year Two

The second year of DermUnbound will focus on three priorities. First, making the Docker framework easier for other clinicians to adopt -- better documentation, simpler setup, and a one-click installer for non-technical users. Second, expanding the model portfolio to include wound assessment, surgical margin evaluation, and treatment response monitoring. Third, building a community of clinician-coders who share tools, validate models, and push the field forward together. The first year proved that physician-controlled AI is possible. The second year will prove it is scalable.