There is a trivial O(n^2) algorithm, but you can do this in O(n). E.g.:
A = [a, b, c, d, e]
P = [4, 3, 2, 0, 1]
We can swap each element in A with the element that P requires at its position; after each swap, one more element will be in the right position. We do this in a circular fashion for each of the positions (the elements being swapped are marked with ^s):
```
[a, b, c, d, e] <- P[0] = 4 != 0 (where a initially was), swap 0 (where a is) with 4
 ^           ^
[e, b, c, d, a] <- P[4] = 1 != 0 (where a initially was), swap 4 (where a is) with 1
    ^        ^
[e, a, c, d, b] <- P[1] = 3 != 0 (where a initially was), swap 1 (where a is) with 3
    ^     ^
[e, d, c, a, b] <- P[3] = 0 == 0 (where a initially was), the cycle is finished
```
After one cycle, we find the next element in the array that is not yet in its right position and repeat. So in the end you will get the result you want, and since each position is touched a constant number of times (for each position, at most one swap is performed), it is O(n) time.
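Here is a minimal Python sketch of the idea (the function name and the done array are just for illustration). Reading the trace, the goal is that the final A[i] holds the element that was originally at A[P[i]]. For now a separate boolean array records which positions are finished; the marking tricks below show how to keep that information inside P itself.

```python
def apply_permutation(A, P):
    """Rearrange A in place so that the new A[i] is the old A[P[i]]."""
    n = len(A)
    done = [False] * n                    # placeholder; the tricks below store this in P
    for start in range(n):
        if done[start]:                   # this position was fixed by an earlier cycle
            continue
        i = start
        # Follow the cycle beginning at `start`; each swap fixes one position.
        while P[i] != start:
            nxt = P[i]
            A[i], A[nxt] = A[nxt], A[i]   # A[i] now holds its final element
            done[i] = True
            i = nxt
        done[i] = True                    # the last position of the cycle is already correct
```

On the example above this produces [e, d, c, a, b], matching the trace.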
You can store the information about which entries are already in their right place by either:

- setting the corresponding entry in P to -1, which is unrecoverable: after the operations above, P will become [-1, -1, 2, -1, -1], which means that only the entry at index 2 might still be out of place, and one further step will confirm it is in place and terminate the algorithm; or

- setting the corresponding entry in P to -P[i] - 1: P becomes [-5, -4, 2, -1, -2], which can be recovered trivially in O(n) (see the sketch after this list).
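As a sketch of the second, recoverable option (again with an illustrative function name): the entries of P themselves serve as the "done" flags, and a final O(n) pass restores P.

```python
def apply_permutation_recoverable(A, P):
    """Same as above, but marks visited entries of P as -value - 1
    and restores P when the rearrangement is finished."""
    n = len(A)
    for start in range(n):
        if P[start] < 0:                  # already handled in an earlier cycle
            continue
        i = start
        while P[i] != start:
            nxt = P[i]
            A[i], A[nxt] = A[nxt], A[i]
            P[i] = -nxt - 1               # mark as done, but keep the old value recoverable
            i = nxt
        P[i] = -start - 1                 # close the cycle (here P[i] was equal to start)
    for i in range(n):
        P[i] = -P[i] - 1                  # undo the marking: O(n) recovery of P
```

Using plain -1 instead (the first option) gives the same rearranged A but destroys P.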