Say I have an n x m image, and I want to map it to a new 256 x m matrix. Every element in the j-th column of the original image stays in the j-th column of the new matrix, but its row is determined by its pixel intensity (range 0–255), which becomes the new row index i. Elements landing on the same index i in the new column j are counted, and each column is then divided by new_column.max(). A Google search turns this representation up under various keywords such as Lumetri scopes, video scopes, video waveform, or RGB parade, depending on which software names it. I do not know its correct definition, so I was not able to do proper research on it; in fact, I couldn't get further than video editing blogs, which is a bit confusing to be honest. But I did achieve it using two nested loops:
```python
r = ...  # image red channel, 1080 x 1080
scope = np.zeros((256, 1080))
for e, i in enumerate(r):
    for k, j in enumerate(i):
        intensity = r[k][e]           # reads column e of r as k varies (valid since the image is square)
        scope[intensity][e] += 1      # count this intensity in scope column e
    scope[:, e] /= scope[:, e].max()  # normalize column e by its maximum count
```
and it takes approximately 5 seconds for a single 1080 x 1080 image, not to mention multiple channels. I am wondering whether there is a faster way to achieve this with NumPy, SciPy, or other tools.
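For reference, here is a vectorized sketch of the same per-column histogram, assuming the input is a uint8 channel (the random `r` below is just a stand-in for real image data). The idea is to offset each column's values by `column_index * 256` so that a single `np.bincount` call builds every column's histogram at once:

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.integers(0, 256, size=(1080, 1080), dtype=np.uint8)  # stand-in channel

n_rows, n_cols = r.shape

# Shift each column's intensities into its own 256-wide bin range,
# so one flat bincount produces all column histograms at once.
offsets = np.arange(n_cols, dtype=np.int64) * 256
flat = (r.astype(np.int64) + offsets[np.newaxis, :]).ravel()

# Row c of the reshaped result is the histogram of column c; transpose
# to get the (256, n_cols) scope layout.
counts = np.bincount(flat, minlength=256 * n_cols).reshape(n_cols, 256).T

# Normalize each column by its own maximum count.
scope = counts / counts.max(axis=0, keepdims=True)
```

This moves the per-pixel work out of Python-level loops and into a single `np.bincount` call, which is typically orders of magnitude faster than the nested-loop version.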