
Algorithm Selection (Lab 6)

When altering a large amount of data by a single value, the approach taken can have a significant impact on performance. In this lab, we had the task of simulating sound and reducing the volume to 75% of the original. Because the samples are stored as int16_t, each value must fall between -32,768 and +32,767. To simulate the sound, the random function was used with a fixed seed so that the same data, and therefore the same results, are produced on every run.

The basic program had to work something like this:

//make a 500-million-element array of random int16_t numbers
//scale the values by 0.75 and time how long it takes
//sum the result and display it
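Below is a minimal sketch of that outline in C. The names (SAMPLES, SEED, data) are my own placeholders rather than the exact lab code, and the scaling step is left as a comment because each of the three parts below fills it in differently.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define SAMPLES 500000000   /* 500 million int16_t samples */
#define SEED    1           /* fixed seed for repeatable data */

int main(void) {
    int16_t *data = calloc(SAMPLES, sizeof(int16_t));
    if (data == NULL)
        return 1;

    srand(SEED);
    for (size_t i = 0; i < SAMPLES; i++)
        data[i] = (int16_t)((rand() % 65536) - 32768);

    clock_t start = clock();
    /* scaling step goes here -- see the three variants below */
    clock_t end = clock();

    long long sum = 0;
    for (size_t i = 0; i < SAMPLES; i++)
        sum += data[i];

    printf("sum = %lld  time = %.2f s\n",
           sum, (double)(end - start) / CLOCKS_PER_SEC);

    free(data);
    return 0;
}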




Part 1

This part simply multiplies each value in the array by 0.75 and stores it back in the array. The code I made for that is here. This approach took the most time, coming in at approximately 8 seconds.
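A rough sketch of that loop (my own approximation of the approach, not the posted code), reusing the data array from the harness above:

/* Part 1: scale each sample with a floating-point multiply
 * and store the result back into the array. */
for (size_t i = 0; i < SAMPLES; i++)
    data[i] = (int16_t)(data[i] * 0.75);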

Part 2

This part was completed using a precalculated array of values scaled to 75% volume. The work during the timed step is much lighter: each sample's current value + 32768 (because an index can't be negative) is used to look up the scaled value in the table. The code for this step is here. This approach was faster, coming in at approximately 6 seconds.
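A sketch of that idea (table and variable names are mine): every possible sample value is scaled once up front, and the timed loop only performs an array lookup.

/* Part 2: precompute all 65,536 possible scaled values,
 * then index the table with (sample + 32768) so the index
 * is never negative. */
int16_t lookup[65536];
for (int v = -32768; v <= 32767; v++)
    lookup[v + 32768] = (int16_t)(v * 0.75);

for (size_t i = 0; i < SAMPLES; i++)
    data[i] = lookup[data[i] + 32768];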

Part 3

The final part was made to demonstrate the speed of bit shifting. The 0.75 factor was first multiplied by 256 to turn it into a fixed-point integer, and the product of each array value and that fixed-point volume was then shifted right by 8 bits. The code for this final part can be found here. This was by far the fastest approach, taking approximately 3 seconds to complete.
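A sketch of the fixed-point version (again my own approximation): 0.75 × 256 = 192, so multiplying by 192 and shifting right by 8 bits has the same effect as multiplying by 0.75, without any floating-point math in the loop.

/* Part 3: fixed-point scaling. The >> 8 undoes the earlier
 * multiply-by-256, leaving the sample at 75% volume.
 * (Right-shifting a negative int is an arithmetic shift on
 * gcc/clang, which is what this relies on.) */
int16_t factor = (int16_t)(0.75 * 256);   /* = 192 */

for (size_t i = 0; i < SAMPLES; i++)
    data[i] = (int16_t)((data[i] * factor) >> 8);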

These run times have been captured live and are available here for future reference. There are some important questions that help us analyze what we have just made.

1) What is the impact of various optimization levels on the software performance?

If we apply aggressive optimization (-O3), the run time is cut in half or better for all three parts, a huge improvement. This can be seen here.

2) Does the distribution of data matter?

The distribution of data does matter; this is shown by part 2 being faster than part 1. When the scaled values are precomputed into a separate array, it is faster to read them from that lookup table than to perform the multiplication each time.

3) If samples are fed at CD rate (44100 samples per second x 2 channels), can both algorithms keep up?

All 3 algorithms can easily keep up. How can we determine this? At CD rate we are looking at 44,100 × 2 = 88,200 samples per second. Even the slowest algorithm processes 500 million samples in roughly 8 seconds, which is about 62.5 million samples per second, so all three far exceed the requirement.

4) What is the memory footprint of each approach?

The beautiful thing about the calloc() function is that these algorithms use no more memory than they ask for. The memory footprint is essentially bound by the size and type of the array(s): 500,000,000 samples × 16 bits = 8,000,000,000 bits = 1,000,000,000 bytes ≈ 1,000 MB for the sound array. Part 2 additionally needs its 65,536-entry int16_t lookup table, roughly 128 KB, which is negligible next to the main array.

5) What is the performance of each approach?

The second approach is slightly faster than the first approach, and the third approach is drastically faster than both.

6) What is the energy consumption of each approach? (What information do you need to calculate this?)

To calculate energy consumption, the architecture (ARM), the time taken, and the processor's power draw would have to be considered; energy is roughly average power multiplied by run time. A longer run time will therefore generally consume more energy, since the CPU does not sit idle while the calculation is running.

7) Aarchie and Betty (Xerxes) have different performance profiles, so it's not reasonable to compare performance between the machines, but it is reasonable to compare the relative performance of the two algorithms in each context. Do you get similar results?

The relative results on both machines seemed fairly similar. This is difficult to test precisely, however, because the load on each machine varies and causes discrepancies.

8) What other optimizations can be applied to this problem?

To push these algorithms to their limit, they could be rewritten in assembly, which has already been shown to yield the fastest times.

