This question kind of put me in a dilemma.
Theoretically, it is possible if and only if your network has no non-linearities like ReLU and every weight matrix (plus bias) is invertible. Even then, inverting the matrix multiplications becomes a problem as the dimension increases.
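To make the "if and only if" condition concrete, here is a minimal sketch of inverting a single linear layer y = Wx + b when W is square and invertible. The layer and shapes are hypothetical, just for illustration; a real network with non-square matrices or ReLUs would not admit this exact inverse.

```python
import numpy as np

# Hypothetical single linear layer: y = W @ x + b, with W square.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))  # invertible with probability 1
b = rng.standard_normal(3)

x = rng.standard_normal(3)
y = W @ x + b

# Invert the layer: x = W^{-1} (y - b), via a linear solve
# rather than an explicit matrix inverse.
x_recovered = np.linalg.solve(W, y - b)
print(np.allclose(x, x_recovered))
```

The moment you insert a ReLU between two such layers, the zeroed-out coordinates are gone and no exact inverse exists, which is the point above.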
But in practice, machine learning models tend to generalize from the given input. So if you can build a model for f(x, y), you can build another model for g(x, z), since the two problems do not differ that much. For complex models, inverting the network is not practical, and it does not make much sense to me anyway: knowledge would be lost along the way.
So even if you can do it, it requires a lot of extra work, and in my opinion it is not worth the effort. Training a new model should be less painful.