I'm parsing a webpage that lists all the usernames I need. Unfortunately, when I scroll the page to load new data, the old data remains on the page. Once the page reaches 1000+ users, the script slows down significantly because it keeps starting from the top and adding whichever users are missing from my list. I really only need the newly loaded users appended to my list; the script does not need to start from the beginning every time.
Here is the code:
from selenium.webdriver.common.by import By  # needed for the Selenium 4 find_elements call
Usernames = driver.find_elements(By.XPATH, '//*[@class="g-user-username"]')
My idea is to save the last username that was added to my list into a variable, load more data, and then search the page in reverse until that username is found, at which point the search should stop. I just don't know how to make my script search the webpage in reverse.
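Here is a minimal sketch of that reverse search, assuming Selenium 4 syntax; the URL, the driver setup, and the surrounding scroll/wait loop are placeholders, and it assumes the username text itself is distinctive enough to act as a reliable stop marker:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/users")  # placeholder URL

collected = []     # all usernames gathered so far, in page order
last_seen = None   # text of the most recently collected username


def collect_new_usernames():
    """Scan the username elements from the bottom of the page upward and
    stop as soon as the previously collected username is reached."""
    global last_seen
    elements = driver.find_elements(By.XPATH, '//*[@class="g-user-username"]')
    new_names = []
    for element in reversed(elements):   # walk the page in reverse order
        name = element.text
        if name == last_seen:            # reached already-collected data, stop
            break
        new_names.append(name)
    new_names.reverse()                  # restore top-to-bottom order
    collected.extend(new_names)
    if collected:
        last_seen = collected[-1]
    return new_names


# usage: scroll, wait for new content to load (however the script already does
# that), then pick up only the names that were appended since the last pass
new_users = collect_new_usernames()
```

Since the old elements stay in the same order at the top of the page, an alternative that avoids the string comparison would be to slice by count, e.g. iterate only over `elements[len(collected):]`; that also sidesteps problems if two users share the same displayed name.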