1. I meant to type "the answer is affirmative and the convergence is in probability, possibly pathwise almost surely". In fact, the convergence in probability is uniform on every time interval.
2. The original question is:

Does [$]y\rightarrow x_1[$] with respect to the trajectories in some sense, e.g. in probability or in distribution, as [$]\epsilon\rightarrow0[$]?
It asks whether there is convergence in some sense yet to be determined. It is an open question. What is your objection to the question?
3. Your previous posts, ISayMoo, demand to know the distributions of [$]x(t)[$] and [$]y(t)[$] at every [$]t[$]. It is not clear what your objection is. Are you asking for an explicit expression for the distributions of [$]x(t)[$] and [$]y(t)[$] at every [$]t[$]? What would that be for?
Do you object to what I said as follows? Given the initial conditions [$]x(t=0), x_0(t=0), y(t=0), x_1(t=0)[$], the values [$]x(t), x_0(t), y(t), x_1(t)[$] are all uniquely determined at every sample point, and therefore so are the distributions of those variables at every [$]t[$].
The problem with pathwise convergence is the same as with pointwise convergence of functions - it's a weak form of convergence.
Weak compared to what form? Pointwise almost sure convergence implies convergence in probability. The gist of the proof is that the set of sample points whose distance to the target exceeds a given positive [$]\epsilon[$] infinitely many times in the sequence is a subset of the pointwise divergent set, which has measure zero. Therefore almost sure convergence is stronger than convergence in probability, which in turn is stronger than convergence in distribution. I can write out the proof if you want to see it.
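For concreteness, the proof sketched above can be written out as follows (a sketch in generic notation [$]X_n \to X[$]; [$]U(\epsilon)[$] is the divergence set referred to later in this post):

```latex
% Sketch: almost sure convergence implies convergence in probability.
% Assume X_n -> X almost surely and fix \epsilon > 0.
\begin{align*}
U(\epsilon)
  &= \{\omega : |X_n(\omega) - X(\omega)| > \epsilon \text{ for infinitely many } n\}
   = \bigcap_{N \ge 1} \bigcup_{n \ge N} \{|X_n - X| > \epsilon\}, \\
U(\epsilon)
  &\subseteq \{\omega : X_n(\omega) \not\to X(\omega)\}
  \quad\Longrightarrow\quad \mathbb{P}\big(U(\epsilon)\big) = 0, \\
\mathbb{P}\big(|X_N - X| > \epsilon\big)
  &\le \mathbb{P}\Big(\bigcup_{n \ge N} \{|X_n - X| > \epsilon\}\Big)
  \;\xrightarrow[N \to \infty]{}\; \mathbb{P}\big(U(\epsilon)\big) = 0,
\end{align*}
```

where the last limit uses continuity of the measure from above, since the unions decrease in [$]N[$].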
More importantly, how is this an objection to my original question, which asks whether there is convergence and, if so, in what sense? How can one object to a question? My upcoming answer provides one solution. What is the objection to that?
The rate of convergence will depend on what path you're on.
This is true. Nevertheless the rate of convergence is irrelevant. However varied the rates are, the random variables at those sample points eventually converge. The set of all converging sample points cannot intersect the set [$]U(\epsilon)[$] of points that remain farther than a given positive [$]\epsilon[$] from the target infinitely many times in the sequence. Since the pointwise convergent set has measure [$]1[$], the measure of [$]U(\epsilon)[$] has to be zero. Therefore almost sure pointwise convergence implies convergence in probability.
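A toy Monte Carlo illustration of this point (my own example, not from the thread): take [$]X_n(\omega)=\omega^n[$] with [$]\omega[$] uniform on [$][0,1][$]. Every path converges to [$]0[$] at its own geometric rate, and paths with [$]\omega[$] near [$]1[$] converge arbitrarily slowly, yet the measure of paths still farther than [$]\epsilon[$] from the limit shrinks to zero:

```python
import random

random.seed(0)
M = 100_000                                  # number of sample paths (omegas)
eps = 0.1
omegas = [random.random() for _ in range(M)]

# X_n(omega) = omega**n -> 0 for every omega < 1, i.e. almost surely,
# but the rate depends on the path: omega near 1 converges very slowly.
for n in (1, 5, 20, 100):
    exceed = sum(1 for w in omegas if w**n > eps) / M
    print(f"n={n:4d}  P(|X_n| > eps) ~ {exceed:.4f}")

# The empirical probabilities decrease toward 0 as n grows:
# convergence in probability despite wildly different pathwise rates.
```

(The exact value is [$]P(|X_n|>\epsilon)=1-\epsilon^{1/n}[$], which tends to [$]0[$] as [$]n\to\infty[$], matching the simulation.)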
So you don't know if the prices (or other expectations) computed for one process will converge to the prices computed for the other process, even if the processes convergence pathwise.
This is a non sequitur. Pathwise almost sure convergence implies convergence in distribution, and convergence in distribution is equivalent to the convergence of the expectations of all bounded continuous functions.
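The step from pathwise convergence to convergence of such expectations can be justified in one line (a sketch, in the notation of my question, for a bounded continuous payoff [$]f[$]):

```latex
% Sketch: if y -> x_1 pathwise almost surely as \epsilon -> 0 and
% f is continuous with |f| <= C, then dominated convergence
% (with the constant C as dominating function) gives
\mathbb{E}\big[f\big(y(t)\big)\big]
  \;\xrightarrow[\epsilon \to 0]{}\;
  \mathbb{E}\big[f\big(x_1(t)\big)\big].
```

So prices of bounded continuous payoffs computed for one process do converge to those computed for the other.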