My thoughts:
Normally when I write a foreach loop, I am not thinking about how many cores are available, so I would write:
Code:
foreach (int i in intArray)
{
    dosomething(i);
}
If "dosomething" takes a very long time, and the "intArray" is small, then it would be better to write:
Code:
using System.Threading.Tasks;

Parallel.ForEach(intArray, i => dosomething(i));
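By default the runtime decides how many worker threads to use. If you want to cap that yourself, there is an overload that takes ParallelOptions. A minimal sketch, reusing the intArray and dosomething names from above; pinning to Environment.ProcessorCount is just an assumption, the best value depends on the workload:
Code:
using System;
using System.Threading.Tasks;

// Allow at most ProcessorCount iterations to run concurrently.
var options = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };
Parallel.ForEach(intArray, options, i => dosomething(i));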
If "dosomething" takes a moderate time, and the "intArray" is very large, then it would be better to write:
Code:
int c = intArray.Length;

//divide the array into 4 parts, so at least a quad core can take advantage.
//do not want to divide too finely; how far to split depends on how large the array is.
//if it is really large, then divide the array into 16 parts or more.
int part1 = 0;
int part2 = (c / 4) * 1;
int part3 = (c / 4) * 2;
int part4 = (c / 4) * 3;

//each loop below would run on its own thread, e.g. created with
//ParameterizedThreadStart and kicked off with Start; the threading code is
//omitted here for simplicity (a sketch follows after this block).

//thread 1
for (int i = part1; i < part2; i++)
{
    dosomething(intArray[i]);
}

//thread 2
for (int i = part2; i < part3; i++)
{
    dosomething(intArray[i]);
}

//thread 3
for (int i = part3; i < part4; i++)
{
    dosomething(intArray[i]);
}

//thread 4 also picks up the remainder when c is not divisible by 4
for (int i = part4; i < c; i++)
{
    dosomething(intArray[i]);
}
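For completeness, here is one way the omitted threading code might look. This is only a minimal sketch, not the exact code I had in mind: it assumes dosomething is thread-safe, uses plain System.Threading.Thread with a closure instead of ParameterizedThreadStart, and Join to wait for all four parts. The dosomething here is a stand-in that just prints its argument:
Code:
using System;
using System.Threading;

class PartitionDemo
{
    // Stand-in for the real work; assumed to be thread-safe.
    static void dosomething(int value)
    {
        Console.WriteLine(value);
    }

    static void Main()
    {
        int[] intArray = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
        int c = intArray.Length;
        int parts = 4; // one part per core on a quad core

        var threads = new Thread[parts];
        for (int p = 0; p < parts; p++)
        {
            int start = (c / parts) * p;
            // The last part also takes the remainder when c is not divisible by parts.
            int end = (p == parts - 1) ? c : (c / parts) * (p + 1);

            threads[p] = new Thread(() =>
            {
                for (int i = start; i < end; i++)
                {
                    dosomething(intArray[i]);
                }
            });
            threads[p].Start();
        }

        // Wait for all four parts to finish before moving on.
        foreach (Thread t in threads)
        {
            t.Join();
        }
    }
}
That said, Parallel.For(0, c, i => dosomething(intArray[i])) gets you roughly the same result with far less code, and it handles the partitioning and the remainder for you.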
It would be nice if Visual Studio shipped snippets for dividing arrays of various sizes like this.
This code should run faster on a processor with many cores.
There are also the concurrent collections in .NET 4.0 and later, like ConcurrentDictionary, but those are thread-safe containers rather than parallel loops, so I would expect naive per-element parallelism over one to perform about the same as Parallel.ForEach. It depends, though: you may not benefit from many threads on very small data, and fewer threads each handling a larger chunk of data may be better. We could benchmark it and find out.
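On that last point, the framework already has a helper for the "fewer threads, larger chunks" idea: Partitioner.Create(0, c) hands each worker a contiguous index range instead of one element at a time. A sketch, again assuming the same intArray and dosomething as above:
Code:
using System.Collections.Concurrent;
using System.Threading.Tasks;

int c = intArray.Length;

// Each worker receives a (fromInclusive, toExclusive) range rather than
// a single element, which cuts the per-element scheduling overhead.
Parallel.ForEach(Partitioner.Create(0, c), range =>
{
    for (int i = range.Item1; i < range.Item2; i++)
    {
        dosomething(intArray[i]);
    }
});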
What are your thoughts?
What other parallel code could we use, and for what situations?