Wednesday, October 2, 2019

SPO600 Ramblings

Funnily enough, when I first got together with my group during our first lab, we were all pretty confused. The class is called Software Portability and Optimization, and I was with a few group members who hadn't read the course outline and thought this course was about refactoring, or just writing code in general, be it more efficient, cleaner... you get the point.

I had actually read the course outline, yet I still didn't know what to expect. Was this going to be like my Open Source Development class? Were we going to be taught a few basics and thrown to the wolves? Were we supposed to find a smallish open source application and port it to a platform other than the one it was written for?

We were even more confused when we were told to start programming in assembly for the x86_64 and ARM64 architectures during our lab. At this point a few people in the group were already rethinking their choice to enrol in this course. I guess what was going through our minds was "HOW IS THIS HELPING US LEARN SOFTWARE OPTIMIZATION?" and "WE DON'T EVEN KNOW WHAT WE'RE DOING IN THIS LAB, god help us".

It wasn't until a recent lecture that it kind of came full circle. The professor gave us an example using a digital image. Say we have a picture at a resolution of 1920 x 1080 and each pixel takes 3 bytes (RGB); in the non-optimized scenario that's about 6 million bytes of data. However, if we analyze the picture, we can build a table of only the colors that actually appear in it, and store each pixel as a short index into that table instead of a full RGB value, so we no longer hold data for shades that are never used.
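To make that concrete for myself, I threw together a tiny toy program (my own sketch, nothing from the lecture) that builds a color table for an image and stores each pixel as a one-byte index into it:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy example of indexed color: keep one table of the distinct colors
 * in the image, and store each pixel as a 1-byte index into that table
 * instead of 3 bytes of RGB. Only works while the image uses at most
 * 256 distinct colors. */

typedef struct { uint8_t r, g, b; } rgb_t;

#define W 4
#define H 2
#define MAX_PALETTE 256

static int find_or_add(rgb_t *palette, int *count, rgb_t c)
{
    for (int i = 0; i < *count; i++)
        if (palette[i].r == c.r && palette[i].g == c.g && palette[i].b == c.b)
            return i;               /* color is already in the table */
    palette[*count] = c;            /* assumes we never exceed MAX_PALETTE */
    return (*count)++;
}

int main(void)
{
    /* A tiny 4x2 "image" with only three distinct colors. */
    rgb_t image[W * H] = {
        {255, 0, 0}, {255, 0, 0}, {0, 255, 0}, {0, 255, 0},
        {0, 0, 255}, {0, 0, 255}, {255, 0, 0}, {0, 255, 0},
    };

    rgb_t palette[MAX_PALETTE];
    uint8_t indexed[W * H];
    int colors = 0;

    for (int i = 0; i < W * H; i++)
        indexed[i] = (uint8_t)find_or_add(palette, &colors, image[i]);

    size_t raw    = sizeof image;                             /* 3 bytes per pixel */
    size_t packed = sizeof indexed + colors * sizeof(rgb_t);  /* 1 byte + table    */
    printf("raw: %zu bytes, indexed: %zu bytes, distinct colors: %d\n",
           raw, packed, colors);
    return 0;
}
```

On a full 1920 x 1080 image the same trick turns roughly 6 MB of raw pixels into roughly 2 MB plus a small table, as long as the picture doesn't use more than 256 colors.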

To shrink that table even further, if two pixels are extremely close in shade, we can settle on one of the two colors for both, reducing the number of colors the picture needs. This takes advantage of what is called psychovisual redundancy: the eye can't tell the difference anyway. I'm assuming this is what happens when you choose to compress an image, and that depending on the compression level the computer is more or less aggressive with this technique.
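Here's an equally rough sketch of that second idea, collapsing shades that are nearly identical into one palette entry. The tolerance value and the "close enough" test are just numbers I made up for illustration, not anything from the lecture:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy shade-merging: treat two colors as "the same" when every channel
 * differs by less than TOL, so near-duplicate shades share one palette
 * entry. TOL is an arbitrary tolerance chosen for this example. */

typedef struct { uint8_t r, g, b; } rgb_t;

#define TOL 8

static int close_enough(rgb_t a, rgb_t b)
{
    return abs(a.r - b.r) < TOL &&
           abs(a.g - b.g) < TOL &&
           abs(a.b - b.b) < TOL;
}

int main(void)
{
    /* Two near-identical reds and one green. */
    rgb_t shades[] = { {200, 30, 30}, {203, 28, 33}, {90, 200, 40} };
    size_t n = sizeof shades / sizeof shades[0];

    rgb_t palette[16];
    int count = 0;

    for (size_t i = 0; i < n; i++) {
        int found = 0;
        for (int j = 0; j < count; j++)
            if (close_enough(shades[i], palette[j])) { found = 1; break; }
        if (!found)
            palette[count++] = shades[i];   /* a genuinely new shade */
    }

    printf("%zu input shades collapsed into %d palette colors\n", n, count);
    return 0;
}
```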

Similar techniques are applied to sound, such as song files: we can discard or de-emphasize the data that matters least, or that humans can't hear anyway. Reducing the number of bits used to describe each sample of music is very much like reducing the number of colors in a digital image on a screen. And just like when compressing images, the chosen bit rate determines how aggressive the computer is with this technique.
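The crudest version of "fewer bits per sample" I could think of is just throwing away the low bits of each 16-bit sample. Real codecs like MP3 are far more sophisticated (they model what the ear actually notices), so treat this as a cartoon of the idea rather than how they really work; the sample values are made up:

```c
#include <stdint.h>
#include <stdio.h>

/* Crude bit-depth reduction: squeeze 16-bit PCM samples into 8 bits by
 * dividing by 256, halving the storage at the cost of fine detail. */

int main(void)
{
    int16_t samples16[] = { 12000, -15321, 300, -7, 32000 };
    size_t n = sizeof samples16 / sizeof samples16[0];
    int8_t samples8[sizeof samples16 / sizeof samples16[0]];

    for (size_t i = 0; i < n; i++)
        samples8[i] = (int8_t)(samples16[i] / 256);   /* keep only the top 8 bits */

    printf("16-bit: %zu bytes, 8-bit: %zu bytes\n",
           sizeof samples16, sizeof samples8);
    for (size_t i = 0; i < n; i++)
        printf("%6d -> %4d (reconstructed as %6d)\n",
               samples16[i], samples8[i], samples8[i] * 256);
    return 0;
}
```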

"So how does efficiency relate to any of this?" our professor then asked. One application he mentioned was the battery life of mobile devices. These steps toward optimizing performance and efficiency are what allow us to use our mobile devices longer: processing any data takes power, so less processing means less power used.

Ah.
