I have a Python script and 2,000 CSV files. The script reads in a CSV file containing stock price information, processes it through a series of calculations, and outputs buy/sell decisions along with other information.
Currently I have put a for loop on top of this script so it processes the 2,000 files one by one. It is slow, and it can stop in the middle of a run for various reasons; odd data in some of the stocks is one of them. I don't actually need every stock to be processed, though. A majority is fine for research purposes.
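To be concrete, here is roughly what my current loop looks like, with a try/except added so one bad file no longer aborts the whole run (process_file is just a placeholder for my real per-file logic):

```python
import glob

def process_file(path):
    """Placeholder for my real script: read the CSV, run the
    calculations, return buy/sell decisions and other info."""
    return {"file": path, "decision": "buy"}

results = []
failed = []
for path in glob.glob("data/*.csv"):
    try:
        results.append(process_file(path))
    except Exception as exc:
        # Record the bad file and keep going instead of stopping.
        failed.append((path, str(exc)))
```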
I wonder, is there a better way to do this? I am thinking that running the 2,000 files in some parallel form might solve the problem. I did some research, but I am not sure exactly where or what to look at.
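From my research I pieced together a rough sketch of what I mean, assuming the standard-library concurrent.futures module is the right direction (process_file is again a placeholder, and the wrapper is my guess at how to keep one bad file from killing the pool):

```python
import glob
from concurrent.futures import ProcessPoolExecutor

def process_file(path):
    """Placeholder for the real per-file calculations."""
    return {"file": path, "decision": "buy"}

def safe_process(path):
    # Return an error marker instead of raising, so the pool survives bad files.
    try:
        return ("ok", path, process_file(path))
    except Exception as exc:
        return ("error", path, str(exc))

if __name__ == "__main__":
    paths = glob.glob("data/*.csv")
    results, failures = [], []
    # Defaults to one worker process per CPU core.
    with ProcessPoolExecutor() as pool:
        for status, path, payload in pool.map(safe_process, paths):
            (results if status == "ok" else failures).append((path, payload))
    print(f"{len(results)} processed, {len(failures)} skipped")
```

Does something like this make sense, or is there a better tool for this kind of batch job?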
Please share some thoughts. Thanks very much!