You are optimizing an approximation to a non-convex function, with the wrong function approximator, based on a finite sample of noisy observations. The global minimum is overfit by definition.
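A toy sketch of that point (NumPy; the sine "true" function and the polynomial degree are arbitrary choices for illustration): drive the training loss toward its minimum with an over-capacity approximator and you memorize the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of a smooth underlying function (a sine, picked
# purely for illustration).
x_train = np.sort(rng.uniform(-1, 1, 15))
y_train = np.sin(3 * x_train) + rng.normal(0, 0.3, 15)

# The "wrong approximator": a degree-12 polynomial has enough capacity
# to chase the noise, so minimizing training loss memorizes the sample.
coeffs = np.polyfit(x_train, y_train, 12)

x_test = np.linspace(-1, 1, 200)
train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - np.sin(3 * x_test)) ** 2)

print(train_err, test_err)  # training error is the smaller of the two
```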
Lack of robustness isn't caused by stochastic gradient descent. If the problem is ill-posed, it's ill-posed.
If you're worried about perturbed images, then perturb the images in the training set. If your fit is on a particular sample type, why expect it to extrapolate correctly? How is that different from fitting a linear regression and extrapolating the line of best fit?
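A minimal sketch of that fix, assuming Gaussian pixel noise is the perturbation you care about (the `augment` helper and its parameters are illustrative, not any particular library's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(images, n_copies=4, noise_scale=0.05):
    # Hypothetical helper (name and parameters are illustrative): return
    # the originals plus n_copies noise-perturbed versions of each batch,
    # so the training distribution covers the perturbations you expect.
    perturbed = [images + rng.normal(0, noise_scale, images.shape)
                 for _ in range(n_copies)]
    return np.concatenate([images, *perturbed], axis=0)

batch = rng.uniform(0, 1, size=(8, 28, 28))  # toy 28x28 "images"
augmented = augment(batch)
print(augmented.shape)  # (40, 28, 28): originals plus 4 perturbed copies
```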
Ethics? Where's the ethics in linear regression software? There are vast swathes of influential and dishonest propaganda, affecting enormous numbers of people, built on spurious regressions.
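The classic demonstration of a spurious regression, Granger–Newbold style: regress one random walk on an independent one and the nominal 5% t-test rejects far more than 5% of the time (the sample sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def spurious_rate(n_sims=200, n=300):
    """Fraction of regressions between INDEPENDENT random walks whose
    slope is 'significant' at the nominal 5% level."""
    hits = 0
    for _ in range(n_sims):
        x = np.cumsum(rng.normal(size=n))   # random walk 1
        y = np.cumsum(rng.normal(size=n))   # random walk 2, independent
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        sxx = ((x - x.mean()) ** 2).sum()
        se = np.sqrt(resid @ resid / (n - 2) / sxx)
        if abs(slope / se) > 1.96:          # nominal 5% two-sided test
            hits += 1
    return hits / n_sims

print(spurious_rate())  # far above 0.05: "significant" pure noise
```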
Where's the ethics in significance testing? No statistics textbook is complete without examples of doctors not understanding p-values or Bayes' rule.
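The standard base-rate example, with made-up but typical numbers: given a rare condition and a decent test, the probability of disease given a positive result is nowhere near the test's sensitivity.

```python
def posterior_positive(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' rule."""
    p_positive = (sensitivity * prevalence
                  + (1 - specificity) * (1 - prevalence))
    return sensitivity * prevalence / p_positive

# 1% prevalence, 90% sensitivity, 95% specificity (illustrative numbers):
print(posterior_positive(0.01, 0.90, 0.95))  # roughly 0.15, not 0.90
```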
Psychology? The journals and popular psychology books are full of results that don't replicate. Most of their results are false. Taleb had a recent argument with some of them on Twitter, and none of them actually understand correlation at a first-year stats level.
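A quick simulation of the mechanism: run pure-noise "studies" and about 5% clear the p < 0.05 bar by construction; publish mostly those, and the literature fills with findings that can't replicate.

```python
import numpy as np

rng = np.random.default_rng(0)

def false_positive_rate(n_studies=2000, n=20):
    # Pure-noise "studies": correlate two independent samples and test
    # r = 0 with the usual t statistic (2.101 is the two-sided 5%
    # critical value for df = n - 2 = 18).
    hits = 0
    for _ in range(n_studies):
        a = rng.normal(size=n)
        b = rng.normal(size=n)
        r = np.corrcoef(a, b)[0, 1]
        t = r * np.sqrt((n - 2) / (1 - r * r))
        if abs(t) > 2.101:
            hits += 1
    return hits / n_studies

print(false_positive_rate())  # about 0.05, as the test guarantees
```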
The ethics issue is a red herring. That's a political battle among the usual technocrats and rent-seekers.