Tuesday, November 5, 2019

SPO600 Lab4 (Algorithm Selection) Continued

Building on top of the previous post, the second task of the lab was to create a look-up table that holds a pre-scaled result for every possible 16-bit sample value, so that scaling a sample becomes a simple array lookup.

The code below shows the changes made to the original program:

        // the table needs one entry for every possible 16-bit sample value
        int size = 65536;
        int16_t* lookupTable = (int16_t*) calloc(size, sizeof(int16_t));

        // since we can't use a negative index, start at the lowest sample value
        int val = -32768;

        // populate the table with pre-scaled sample values
        for (int i = 0; i < size; i++) {
                lookupTable[i] = scale_sample((int16_t) val, 0.75);
                val++;
        }

        // Allocate memory for large in and out arrays
        int16_t*        data;
        data = (int16_t*) calloc(SAMPLES, sizeof(int16_t));

        int             x;
        int             ttl = 0;

        // Seed the pseudo-random number generator
        srand(1);

        // Fill the array with random data
        for (x = 0; x < SAMPLES; x++) {
                data[x] = (rand()%65536)-32768;
        }

        // ######################################
        // This is the interesting part!
        // Scale the volume of all of the samples
        for (x = 0; x < SAMPLES; x++) {
                //need to add 32768 so we don't get a negative index
                data[x] = lookupTable[(data[x]+32768)];
        }
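
For reference, scale_sample() is not shown above; in the lab's original vol1.c it is essentially a one-liner that multiplies the sample by the volume factor and truncates back to 16 bits. A sketch (the exact code in the lab may differ slightly):

        // scale a single signed 16-bit sample by a floating-point volume factor
        static inline int16_t scale_sample(int16_t sample, float volume_factor) {
                return (int16_t) ((float) sample * volume_factor);
        }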

Overall, the result was surprising. I expected the look-up version to be faster, and I even gave it an edge by doubling the number of samples to 10 million instead of the original 5 million.


A perf report of vol2 shows that 0.05% of the time was spent scaling the samples.



For reference, here is the original version run with 10 million samples.
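
Its scaling loop simply calls scale_sample() on every element of the data array; roughly this (a sketch, the lab's exact vol1.c code may differ slightly):

        // Scale the volume of all of the samples, one function call per sample
        for (x = 0; x < SAMPLES; x++) {
                data[x] = scale_sample(data[x], 0.75);
        }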


A perf report of vol1 is also shown below.



