
Beyond the Black Box: Securing Trust in Medical AI with Google Gemini
*This is a submission for the Built with Google Gemini: Writing Challenge.*

## What I Built with Google Gemini

In a recent independent research sprint, I engineered an explainable multi-class lung cancer classification system using transfer learning and adversarial robustness analysis. The system classifies histopathology images into three classes:

- Adenocarcinoma
- Squamous Cell Carcinoma
- Normal tissue

The backbone architecture is EfficientNetB0 (ImageNet-pretrained), fine-tuned with:

- Selective unfreezing of the upper convolutional layers
- Cosine decay learning-rate scheduling
- Stratified K-Fold cross-validation
- Macro F1-score, precision-recall curve, and ROC-AUC evaluation
- Grad-CAM for visual interpretability
- Preliminary adversarial perturbation testing

This was not an accuracy-focused experiment. It was an attempt to answer a more difficult question: how can we make medical AI not only accurate, but explainable, robust, and secure? In clinic
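To make the "adversarial perturbation testing" bullet concrete, here is a minimal FGSM-style sketch in NumPy. It uses a toy logistic model in place of the fine-tuned EfficientNetB0 (where the gradient would come from `tf.GradientTape` instead of the hand-derived expression below); the function name, weights, and `epsilon` value are illustrative assumptions, not the article's actual code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon=0.05):
    """Shift x by epsilon in the sign of the loss gradient (FGSM).

    For a logistic model with binary cross-entropy loss, the gradient
    of the loss w.r.t. the input is (p - y_true) * w, so no autodiff
    framework is needed for this toy stand-in.
    """
    p = sigmoid(x @ w + b)            # model confidence for the true class
    grad_x = (p - y_true) * w         # d(BCE)/dx for the logistic model
    x_adv = x + epsilon * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixels in valid image range

rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0
x = rng.uniform(size=16)              # stand-in for a normalized image patch
y = 1.0

clean_conf = sigmoid(x @ w + b)
adv_conf = sigmoid(fgsm_perturb(x, w, b, y) @ w + b)
print(f"clean confidence: {clean_conf:.3f}, adversarial: {adv_conf:.3f}")
```

Because the perturbation follows the loss gradient, the model's confidence in the true class drops even though the input barely changes, which is exactly the failure mode this kind of robustness testing probes for in a medical setting.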
Continue reading on Dev.to




