<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>The ILNumerics Blog &#187; c++</title>
	<atom:link href="https://ilnumerics.net/blog/tag/c/feed/" rel="self" type="application/rss+xml" />
	<link>https://ilnumerics.net/blog</link>
	<description>The Productivity Machine  &#124;  A fresh attempt for scientific computing  &#124;  http://ilnumerics.net</description>
	<lastBuildDate>Thu, 05 Dec 2024 09:09:24 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.1.41</generator>
	<item>
		<title>ILNumerics Language Features: Limitations for C#, Part II: Compound operators and ILArray</title>
		<link>https://ilnumerics.net/blog/ilnumerics-language-features-limitations-for-c-part-ii-compound-operators-and-ilarray/</link>
		<comments>https://ilnumerics.net/blog/ilnumerics-language-features-limitations-for-c-part-ii-compound-operators-and-ilarray/#comments</comments>
		<pubDate>Sun, 29 Dec 2013 14:17:30 +0000</pubDate>
		<dc:creator><![CDATA[haymo]]></dc:creator>
				<category><![CDATA[C#]]></category>
		<category><![CDATA[Usage]]></category>
		<category><![CDATA[array]]></category>
		<category><![CDATA[c++]]></category>
		<category><![CDATA[compound operator]]></category>
		<category><![CDATA[csharp]]></category>
		<category><![CDATA[ILNumerics]]></category>
		<category><![CDATA[language]]></category>
		<category><![CDATA[Memory Management]]></category>
		<category><![CDATA[syntax]]></category>
		<category><![CDATA[usage]]></category>

		<guid isPermaLink="false">http://ilnumerics.net/blog/?p=517</guid>
		<description><![CDATA[<p>A while ago I blogged about why the CSharp var keyword cannot be used with local ILNumerics arrays (ILArray&#60;T&#62;, ILCell, ILLogical). This post is about the other one of the two main limitations on C# language features in ILNumerics: the use of compound operators in conjunction with ILArray&#60;T&#62;. In the online documentation we state the &#8230; <a href="https://ilnumerics.net/blog/ilnumerics-language-features-limitations-for-c-part-ii-compound-operators-and-ilarray/" class="more-link">Continue reading <span class="screen-reader-text">ILNumerics Language Features: Limitations for C#, Part II: Compound operators and ILArray</span> <span class="meta-nav">&#8594;</span></a></p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/ilnumerics-language-features-limitations-for-c-part-ii-compound-operators-and-ilarray/">ILNumerics Language Features: Limitations for C#, Part II: Compound operators and ILArray</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>A while ago I blogged about <a title="Why the ‘var’ keyword is not allowed in ILNumerics" href="http://ilnumerics.net/blog/why-the-var-keyword-is-not-allowed-in-ilnumerics/">why the C# <strong>var</strong> keyword cannot be used with local ILNumerics arrays</a> (ILArray&lt;T&gt;, ILCell, ILLogical). This post is about the second of the two main limitations on C# language features in ILNumerics: the use of compound operators in conjunction with ILArray&lt;T&gt;. In the <a href="http://ilnumerics.net/GeneralRules.html">online documentation</a> we state the rule as follows:</p>
<blockquote><p>The following features of the C# language are not compatible with the memory management of ILNumerics and their use is <em>not supported:</em></p>
<ul>
<li>The C# var keyword in conjunction with any ILNumerics array types, and</li>
<li>Any compound operator, like +=, -=, /=, *= and so on. Strictly speaking, these operators are not allowed in conjunction with indexers on arrays. So <code>A += 1;</code> is allowed; <code>A[0] += 1;</code> is not!</li>
</ul>
</blockquote>
<p>Let&#8217;s take a closer look at the second rule. Most developers think of compound operators as being just syntactic sugar for some common expressions:</p>
<pre class="brush: csharp; title: ; notranslate">int i = 1;
i += 2;</pre>
<p>&#8230; would simply expand to:</p>
<pre class="brush: csharp; title: ; notranslate">int i = 1;
i  = i + 2; </pre>
<p>For simple types like an integer variable the actual effect is indistinguishable from that expectation. However, compound operators introduce more than that. Back in his time at Microsoft, <a href="http://blogs.msdn.com/b/ericlippert/archive/2011/03/29/compound-assignment-part-one.aspx">Eric Lippert blogged about those subtleties</a>. The article is worth reading for a deep understanding of all side effects. In the following, we focus on the single fact that becomes important in conjunction with ILNumerics arrays: when used with a compound operator, <code>i</code> in the example above is evaluated only once! In contrast, in <code>i = i + 2</code>, <code>i</code> is evaluated twice.</p>
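<p>The evaluate-once behaviour is easy to make visible in any language that lets user types overload their indexers. As a small illustration (sketched in Python, which shares these semantics, rather than C#), a container that logs its indexer calls records exactly one get followed by one set:</p>

```python
# Toy container that logs indexer calls. For box[0] += 2, the receiver 'box'
# and the index 0 are evaluated once; then the getter runs, the sum is
# computed, and the setter stores the result.
class LoggingBox:
    def __init__(self):
        self.data = {0: 1}
        self.log = []

    def __getitem__(self, key):
        self.log.append(("get", key))
        return self.data[key]

    def __setitem__(self, key, value):
        self.log.append(("set", key))
        self.data[key] = value

box = LoggingBox()
box[0] += 2            # behaves like: box[0] = box[0] + 2
print(box.data[0])     # 3
print(box.log)         # [('get', 0), ('set', 0)]
```

<p>For a plain <code>int</code> none of this is observable; it only starts to matter once getter and setter have side effects, as with ILNumerics arrays below.</p>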
<p>Evaluating an <code>int</code> does not cause any side effects. For more complex types, however, the evaluation may well cause side effects. Consider an expression like the following:</p>
<pre class="brush: csharp; title: ; notranslate">ILArray&lt;double&gt; A = 1;
A += 2;</pre>
<p>&#8230; evaluates to something similar to this:</p>
<pre class="brush: csharp; title: ; notranslate">ILArray&lt;double&gt; A = 1;
A = (ILArray&lt;double&gt;)(A + 2); </pre>
<p>There is nothing wrong with that! <code>A += 2</code> will work as expected. Problems arise once we involve indexers on A:</p>
<pre class="brush: csharp; title: ; notranslate">ILArray&lt;double&gt; A = ILMath.rand(1,10);
A[0] += 2;
// this transforms to something similar to the following: 
var receiver = A; 
var index = (ILRetArray&lt;double&gt;)0;
receiver[index] = receiver[index] + 2; </pre>
<p>In order to understand what exactly is going on here, we need to take a look at the definition of indexers on <code>ILArray</code>: </p>
<pre class="brush: csharp; title: ; notranslate">public ILRetArray&lt;ElementType&gt; this[params ILBaseArray[] range] { ... </pre>
<p>The indexer expects a variable length array of <code>ILBaseArray</code>. This gives the most flexibility for defining subarrays in ILNumerics. Indexers accept not only scalars of builtin system types, as in our example, but arbitrary ILArray and string definitions. In the expression <code>A[0]</code>, <code>0</code> is implicitly converted to a scalar ILNumerics array before the indexer is invoked. Thus, a temporary array is created as the argument. Keep in mind that, due to the memory management of ILNumerics, all such implicitly created temporary arrays are immediately disposed of after their first use.</p>
<p>Since both the indexing expression <code>0</code> and the object the indexer is defined on (i.e. A) are evaluated only once, we run into a problem: <code>index</code> is needed twice. First, it is used to acquire the subarray at <code>receiver[index]</code>; the indexer <code>get { ... }</code> function is used for that. Once it returns, all input arguments are disposed &#8211; an important foundation of ILNumerics memory efficiency! Therefore, when we invoke the index setter function with the same <code>index</code> variable, it finds the array already disposed &#8211; and throws an exception.</p>
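<p>The failure mode can be mimicked with a small mock (a Python sketch; the class and method names are illustrative only, not the ILNumerics API): a container that disposes its temporary index argument after each indexer call will serve the getter, but reject the subsequent setter.</p>

```python
# Hypothetical sketch of the dispose-after-use policy described above.
class TempIndex:
    """Stands in for the temporary scalar array created from the literal 0."""
    def __init__(self, value):
        self.value = value
        self.disposed = False

class Container:
    """Disposes temporary index arguments right after each indexer call."""
    def __init__(self, data):
        self.data = data

    def __getitem__(self, idx):
        if idx.disposed:
            raise RuntimeError("index array already disposed")
        result = self.data[idx.value]
        idx.disposed = True        # temporaries are freed after first use
        return result

    def __setitem__(self, idx, value):
        if idx.disposed:
            raise RuntimeError("index array already disposed")
        self.data[idx.value] = value
        idx.disposed = True

c = Container([1.0, 2.0])
i = TempIndex(0)
try:
    c[i] += 2      # the getter succeeds, then the setter sees a disposed index
except RuntimeError as e:
    print(e)       # index array already disposed
```
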
<p>It would certainly be possible to circumvent that behavior by converting scalar system types to <code>ILArray</code> instead of <code>ILRetArray</code>:</p>
<pre class="brush: csharp; title: ; notranslate">ILArray A = ...;
A[(ILArray)0] += 2;</pre>
<p>However, the much less expressive syntax aside, this would not solve our problem in general either. The reason lies in the flexibility required for the indexer arguments: the user would have to manually ensure that all arguments in the indexer argument list are of some non-volatile array type. Casting to <code>ILArray&lt;T&gt;</code> might be an option in some situations. In general, however, compound operators require much more attention due to the efficient memory management in ILNumerics. We considered the risk of failing to provide only non-volatile arguments to be too high, so we decided not to support compound operators at all. </p>
<p>See: <a href="http://ilnumerics.net/$GeneralRules.html">General Rules</a> for ILNumerics, <a href="http://ilnumerics.net/$FunctionRules.html">Function Rules</a>, <a href="http://ilnumerics.net/Subarray0.html">Subarrays</a></p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/ilnumerics-language-features-limitations-for-c-part-ii-compound-operators-and-ilarray/">ILNumerics Language Features: Limitations for C#, Part II: Compound operators and ILArray</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://ilnumerics.net/blog/ilnumerics-language-features-limitations-for-c-part-ii-compound-operators-and-ilarray/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>High Performance Fast Fourier Transformation in .NET</title>
		<link>https://ilnumerics.net/blog/high-performance-fast-fourier-transformation-in-net/</link>
		<comments>https://ilnumerics.net/blog/high-performance-fast-fourier-transformation-in-net/#comments</comments>
		<pubDate>Sat, 24 Aug 2013 12:24:55 +0000</pubDate>
		<dc:creator><![CDATA[Jonas]]></dc:creator>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[Features]]></category>
		<category><![CDATA[ILNumerics]]></category>
		<category><![CDATA[Numerical Algorithms]]></category>
		<category><![CDATA[c++]]></category>
		<category><![CDATA[Fast Fourier Transform]]></category>
		<category><![CDATA[FFT]]></category>
		<category><![CDATA[High Performance]]></category>
		<category><![CDATA[Math Library]]></category>
		<category><![CDATA[Numerical Algorithm]]></category>

		<guid isPermaLink="false">http://ilnumerics.net/blog/?p=453</guid>
		<description><![CDATA[<p>„I started using ILNumerics for the FFT routines. The quality and speed are excellent in a .NET environment.“ The Fourier Transform (named after French mathematician and physicist Joseph Fourier) allows scientists to transform signals between time domain and frequency domain. This way, an arbitrary periodic function can be expressed as a sum of cosine terms. Think &#8230; <a href="https://ilnumerics.net/blog/high-performance-fast-fourier-transformation-in-net/" class="more-link">Continue reading <span class="screen-reader-text">High Performance Fast Fourier Transformation in .NET</span> <span class="meta-nav">&#8594;</span></a></p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/high-performance-fast-fourier-transformation-in-net/">High Performance Fast Fourier Transformation in .NET</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p style="text-align: right;"><em>„I started using ILNumerics for the FFT routines. </em><em>The quality and speed are excellent in a .NET environment.“</em></p>
<p style="text-align: left;" align="right">The Fourier Transform (named after French mathematician and physicist Joseph Fourier) allows scientists to transform signals between time domain and frequency domain. This way, an arbitrary periodic function can be expressed as a sum of cosine terms. Think of the equalizer of your mp3-player: It expresses your music’s signal in terms of the frequencies it is composed of.</p>
<p>The <a href="http://ilnumerics.net/FFTMain.html">Fast Fourier Transform (FFT)</a> is an algorithm for the rapid computation of the discrete Fourier Transform. Being one of the most popular numerical algorithms, it is used in physics, engineering, math and many other domains.</p>
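<p>As a quick illustration of what the transform delivers (sketched with numpy here rather than ILNumerics), sampling a pure 50&nbsp;Hz cosine and taking the magnitude spectrum recovers the frequency the signal is composed of:</p>

```python
import numpy as np

# Sample one second of a 50 Hz cosine at 1 kHz and locate the spectral peak.
fs = 1000                                   # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)                 # 1000 samples
signal = np.cos(2 * np.pi * 50 * t)

spectrum = np.abs(np.fft.rfft(signal))      # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)                                 # 50.0
```
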
<p>In terms of software engineering, the Fast Fourier Transform is a very demanding algorithm: in the .NET framework, a naive approach leads to very low execution speeds. That’s the reason why many .NET developers have to fall back on native C libraries when it comes to <a href="http://ilnumerics.net/FFTMain.html">FFTs</a>.</p>
<p>ILNumerics uses Intel&#8217;s MKL for Fast Fourier Transforms: that&#8217;s why our users don’t have to integrate native libraries themselves for high performance FFTs. Whether they have a scientific or an industrial background, many developers rely on ILNumerics because of its implementation of the Fast Fourier Transform. It’s the fastest you can get today – even for large amounts of data.</p>
<p>ILNumerics provides interfaces for forward and backward Fourier Transforms, for real and complex floating point data, in single and double precision, in one, two or n dimensions. In addition to the MKL&#8217;s FFTs, prepared interfaces for FFTW and for AMD&#8217;s ACML exist.</p>
<p>Learn more about the ILNumerics library and its implementation of <a href="http://ilnumerics.net/FFTMain.html">Fast Fourier Transformation in C#/.NET</a> in the online documentation!</p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/high-performance-fast-fourier-transformation-in-net/">High Performance Fast Fourier Transformation in .NET</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://ilnumerics.net/blog/high-performance-fast-fourier-transformation-in-net/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Fast. Faster …. Performance Comparison: C# (ILNumerics), FORTRAN, MATLAB and numpy – Part II</title>
		<link>https://ilnumerics.net/blog/fast-faster-performance-comparison-c-ilnumerics-fortran-matlab-and-numpy-part-ii/</link>
		<comments>https://ilnumerics.net/blog/fast-faster-performance-comparison-c-ilnumerics-fortran-matlab-and-numpy-part-ii/#comments</comments>
		<pubDate>Mon, 06 Feb 2012 16:51:38 +0000</pubDate>
		<dc:creator><![CDATA[haymo]]></dc:creator>
				<category><![CDATA[Comparison]]></category>
		<category><![CDATA[c++]]></category>
		<category><![CDATA[fortran]]></category>
		<category><![CDATA[ILNumerics]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[plots]]></category>
		<category><![CDATA[test setup]]></category>

		<guid isPermaLink="false">http://ilnumerics.net/blog/?p=52</guid>
		<description><![CDATA[<p>In the first part of my somehow lengthy comparison between Fortran, ILNumerics, Matlab and numpy, I gave some categorization insight into terms related to &#8216;performance&#8217; and &#8216;language&#8217;. This part explains the setup and hopefully the results will fit in here as well (otherwise we&#8217;ll need a third part ) Prerequisites This comparison is going to &#8230; <a href="https://ilnumerics.net/blog/fast-faster-performance-comparison-c-ilnumerics-fortran-matlab-and-numpy-part-ii/" class="more-link">Continue reading <span class="screen-reader-text">Fast. Faster …. Performance Comparison: C# (ILNumerics), FORTRAN, MATLAB and numpy – Part II</span> <span class="meta-nav">&#8594;</span></a></p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/fast-faster-performance-comparison-c-ilnumerics-fortran-matlab-and-numpy-part-ii/">Fast. Faster …. Performance Comparison: C# (ILNumerics), FORTRAN, MATLAB and numpy – Part II</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In the <a href="http://ilnumerics.net/blog/fast-faster-performance-comparison-c-ilnumerics-fortran-matlab-and-numpy-part-i/">first part</a> of my somewhat lengthy comparison between Fortran, ILNumerics, Matlab and numpy, I categorized some of the terms related to &#8216;performance&#8217; and &#8216;language&#8217;. This part explains the setup, and hopefully the results will fit in here as well (otherwise we&#8217;ll need a third part <img src="https://ilnumerics.net/blog/wp-includes/images/smilies/icon_neutral.gif" alt=":|" class="wp-smiley" /> )</p>
<h2>Prerequisites</h2>
<p>This comparison is going to be easy and fair! This means we will not attempt to compare an apple with the same apple wrapped in a paper bag (as is often done with the MKL), nor are we going to use specific features of an individual language/framework just to outperform another framework (like using data structures which are better handled in an OOP language &#8211; let&#8217;s say complicated graph structures).</p>
<p>Rather, we seek an algorithm of:</p>
<ul>
<li>Sufficient size and complexity. A simple binary function like BLAS: DAXPY is not sufficient here, since it would neglect the impact of the memory management &#8211; a very important factor in .NET.</li>
<li>Limited size and complexity in order to be able to implement the algorithm on all frameworks compared (in a reasonable time).</li>
</ul>
<p>We chose the <a title="wikipedia: kmeans" href="http://en.wikipedia.org/wiki/Kmeans" target="_blank">kmeans</a> algorithm. It uses only common array syntax, no calls to linear algebra routines (which are usually implemented via Intel&#8217;s MKL in all frameworks) and &#8211; run on reasonable data sizes &#8211; comes with sufficient computational complexity and memory demands. That way we can measure the true performance of the framework, its array implementation and the feasibility of the mathematical syntax.</p>
<p>Two versions of kmeans were implemented for every framework: a &#8216;natural&#8217; version that can be translated into all languages with minimal differences in execution cost, and a second version with the obvious optimizations recommended for each language applied where applicable. All &#8216;more clever&#8217; optimizations are left to the corresponding compiler/interpreter.</p>
<p>The test setup: Acer TravelMate 8472TG, Intel Core™ i5-450M processor 2.4GHz, 3MB L3 cache, 4GB DDR3 RAM, Windows 7/64Bit. All tests were targeting the x86 platform.</p>
<h2>ILNumerics Code</h2>
<p>The printout of the ILNumerics variant of the kmeans algorithm is shown below. The code demonstrates only the obligatory features for memory management: function and loop scoping and specifically typed function parameters. For clarity, the function parameter checks and the loop parameter initialization have been abbreviated.</p>
<pre class="brush: csharp; title: ; notranslate">public static ILRetArray&lt;double&gt; kMeansClust (ILInArray&lt;double&gt; X, 
                                             ILInArray&lt;double&gt; k,
                                             int maxIterations, 
                                             bool centerInitRandom,
                                             ILOutArray&lt;double&gt; outCenters) {
   using (ILScope.Enter(X, k)) {

// … (abbreviated: parameter checking, center initialization)

while (maxIterations --&gt; 0) {
    for (int i = 0; i &lt; n; i++) {
        using (ILScope.Enter()) {
            ILArray&lt;double&gt; minDistIdx = empty(); 
            min(sum(abs(centers - X[full,i])), minDistIdx,1).Dispose();// **
            classes[i] = minDistIdx[0]; 
        }
    }
    for (int i = 0; i &lt; iK; i++) {
        using (ILScope.Enter()) {
            ILArray&lt;double&gt; inClass = X[full,find(classes == i)]; 
            if (inClass.IsEmpty) {
                centers[full,i] = double.NaN;
            } else {
                centers[full,i] = mean(inClass,1);
            }
        }
    }
    if (allall(oldCenters == centers)) break; 
    oldCenters.a = centers.C; 
}
if (!object.Equals(outCenters, null))
    outCenters.a = centers; 
return classes; 
   }

}</pre>
<p>The algorithm iteratively assigns data points to cluster centres and afterwards recalculates each centre from its members. The first step needs n * m * k * 3 ops, hence its effort is O(nmk). The second step costs only O(kn + mn), so the first loop clearly dominates the algorithm. A version that takes better advantage of available ILNumerics features would replace the line marked with ** by the following line: </p>
<pre class="brush: csharp; title: ; notranslate">...
min(distL1(centers, X[full, i]), minDistIdx, 1).Dispose();
...</pre>
<p>The distL1 function basically removes the need for multiple iterations over the same distance array by condensing the element subtraction, the calculation of the absolute values and their summation into one step for every centre point.</p>
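<p>The effect can be sketched outside ILNumerics as well (a numpy stand-in; <code>distL1</code> itself is an ILNumerics function): the naive formulation materializes a full difference array and an absolute-value array before summing, while the fused form accumulates the L1 distance per centre in a single pass. Both yield identical values:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
centers = rng.random((5, 3))     # m = 5 dimensions, k = 3 centres
x = rng.random(5)                # one data point

# naive: two temporaries (difference array, abs array), then the sum
naive = np.sum(np.abs(centers - x[:, None]), axis=0)
# fused: one pass per centre, no full-size temporaries
fused = np.array([np.abs(centers[:, j] - x).sum() for j in range(3)])
print(np.allclose(naive, fused))    # True
```
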
<h2>Matlab® Code</h2>
<p>For the Matlab implementation the following code was used. Note that the existing kmeans implementation in the stats toolbox was not used, because it deviates significantly from our simple variant in both its configuration options and its inner workings.</p>
<pre class="brush: matlabkey; title: ; notranslate">
function [centers, classes] = kmeansclust (X, k, maxIterations, ...
  centerInitRandom)

% .. (parameter checking and initialization abbreviated)
 
while (maxIterations &gt; 0)
        maxIterations = maxIterations - 1; 
        for i = 1:n
            dist = centers - repmat(X(:,i),1,k); % ***
            [~, minDistIdx] = min(sum(abs(dist)),[], 2);           
            classes(i) = minDistIdx(1); 
        end
        for i = 1:k
            inClass = X(:,classes == i); 
            if (isempty(inClass))
                centers(:,i) = nan;
            else
                centers(:,i) = mean(inClass,2);
                inClassDiff = inClass - repmat(centers(:,i),1,size(inClass,2)); 
            end
        end
        if (all(all(oldCenters == centers))) 
            break;
        end
        oldCenters = centers; 
 end
</pre>
<p>Again, a version better matching the performance recommendations for the language avoids the repmat operation and reuses the single column of X for all centres when calculating the difference between the centres and the current data point:</p>
<pre class="brush: matlabkey; title: ; notranslate">...
dist = bsxfun(@minus,centers,X(:,i));
...</pre>
<h2>FORTRAN Code</h2>
<p>In order to match our algorithm most closely, the first FORTRAN implementation mirrors the optimized <code>bsxfun</code> variant of Matlab and, correspondingly, the common vector expansion in ILNumerics. The array of distances between the cluster centres and the current data point is pre-calculated in each iteration over i: </p>
<pre class="brush: plain; title: ; notranslate">subroutine SKMEANS(X,M,N,IT,K,classes) 
!USE KERNEL32
  !DEC$ ATTRIBUTES DLLEXPORT::SKMEANS 

  ! DUMMIES
  INTEGER :: M,N,K,IT
  DOUBLE PRECISION, INTENT(IN) :: X(M,N)
  DOUBLE PRECISION, INTENT(OUT) :: classes(N)
  ! LOCALS 
  DOUBLE PRECISION,ALLOCATABLE :: centers(:,:) &amp;
                   ,oldCenters(:,:) &amp;
                   ,distances(:) &amp; 
                   ,tmpCenter(:) &amp; 
                   ,distArr(:,:)
  DOUBLE PRECISION nan
  INTEGER S, tmpArr(1)
  
  nan = 0
  nan = nan / nan
  
  ALLOCATE(centers(M,K),oldCenters(M,K),distances(K),tmpCenter(M),distArr(M,K))

  centers = X(:,1:K)  ! init centers: first K data points
  oldCenters = centers ! init reference for the convergence check
  do  
    do i = 1, N       ! for every sample...
        do j = 1, K   ! ... find its nearest cluster
            distArr(:,j) = X(:,i) - centers(:,j)         ! **
        end do
        distances(1:K) = sum(abs(distArr(1:M,1:K)),1)    
        tmpArr = minloc ( distances(1:K) )
        classes(i) = tmpArr(1);
    end do
  
    do j = 1,K ! for every cluster 
        tmpCenter = 0; 
        S = 0; 
        do i = 1,N ! compute mean of all samples assigned to it
            if (classes(i) == j) then
                tmpCenter = tmpCenter + X(1:M,i); 
                S = S + 1; 
            end if     
        end do
        if (S &gt; 0) then 
            centers(1:M,j) = tmpCenter / S; 
        else 
            centers(1:M,j) = nan;
        end if 
    end do
    
    if (IT .LE. 0) then ! exit condition
        exit; 
    end if 
    IT = IT - 1; 
    if (sum(sum(centers - oldCenters,2),1) == 0) then
        exit;  
    end if 
    oldCenters = centers; 
  end do
  DEALLOCATE(centers, oldCenters,distances,tmpCenter);
end subroutine SKMEANS</pre>
<p>Another version of the first step was implemented which uses memory accesses more efficiently. Its formulation closely matches the ‘optimized’ ILNumerics version: </p>
<pre class="brush: plain; title: ; notranslate">
    ...
    do i = 1, N          ! for every sample...
        do j = 1, K      ! ... find its nearest cluster
            distances(j) = sum(                                &amp;
                                abs(                           &amp;
                                    X(1:M,i) - centers(1:M,j)))
        end do

        tmpArr = minloc ( distances(1:K) )
        classes(i) = tmpArr(1);
    
    end do
    ...
</pre>
<h2>numpy Code</h2>
<p>The general variant of the kmeans algorithm in numpy is as follows:</p>
<pre class="brush: python; title: ; notranslate"> 
from numpy import *

def kmeans(X,k):
    n = size(X,1)
    maxit = 20
    centers = X[:,0:k].copy()
    classes = zeros((1, n))
    oldCenters = centers.copy()
    for it in range(maxit):
        for i in range(n):
            dist = sum(abs(centers - X[:,i,newaxis]), axis=0)
            classes[0,i] = dist.argmin()
            
        for i in range(k):
            inClass = X[:,nonzero(classes == i)[1]]
            if inClass.size == 0:
                centers[:,i] = nan
            else:
                centers[:,i] = inClass.mean(axis=1)

        if all(oldCenters == centers):
            break
        else:
            oldCenters = centers.copy()
</pre>
<p>Since this framework showed the slowest execution speed of all frameworks in the comparison (and due to my limited knowledge of numpy&#8217;s optimization recommendations), no improved version was sought.</p>
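<p>For the curious, an obvious vectorization along numpy&#8217;s recommendations (not benchmarked here, shown only as a sketch) would replace the inner assignment loop by a single broadcast that computes all point-to-centre L1 distances at once:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((4, 100))      # m = 4 dimensions, n = 100 samples
centers = X[:, :3].copy()     # k = 3 centres, taken from the first samples

# shape (k, n): L1 distance of every sample to every centre, via broadcasting
dist = np.abs(centers[:, :, None] - X[:, None, :]).sum(axis=0)
classes = dist.argmin(axis=0)
print(classes.shape)          # (100,)
```
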
<h2>Parameters</h2>
<p>The 7 algorithms described above were all tested against the same data sets of corresponding sizes. Test data were evenly distributed random numbers, generated on the fly and reused for all implementations. The problem sizes m and n and the number of clusters k were varied according to the following table: </p>
<pre>
	min value 	max value	fixed parameters
m	50		2000		n = 2000, k = 350
n	400		3000		m = 500, k = 350
k	10		1000		m = 500, n = 2000
</pre>
<p>While one parameter was varied, the other parameters were held fixed at the values given above.<br />
The results produced by all implementations were checked for identity. Each test was repeated 10 times (5 times for larger datasets) and the average execution time was taken as the test result. Minimum and maximum execution times were tracked as well.  </p>
<h2>Results</h2>
<p>Ok, I think we made it to the results in this part! <img src="https://ilnumerics.net/blog/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> The plots first! The runtime measures are shown in the next three figures as error bar plots:<br />
<figure id="attachment_68" style="width: 688px;" class="wp-caption aligncenter"><a href="http://ilnumerics.net/blog/wp-content/uploads/2012/02/plotall4k.png"><img src="http://ilnumerics.net/blog/wp-content/uploads/2012/02/plotall4k-1024x768.png" alt="" width="688" height="516" class="size-large wp-image-68" /></a><figcaption class="wp-caption-text">Execution speed comparison for varying k</figcaption></figure><br />
<figure id="attachment_67" style="width: 688px;" class="wp-caption aligncenter"><a href="http://ilnumerics.net/blog/wp-content/uploads/2012/02/plotall4m.png"><img src="http://ilnumerics.net/blog/wp-content/uploads/2012/02/plotall4m-1024x768.png" alt="" width="688" height="516" class="size-large wp-image-67" /></a><figcaption class="wp-caption-text">Execution speed comparison for varying m</figcaption></figure><br />
<figure id="attachment_66" style="width: 688px;" class="wp-caption aligncenter"><a href="http://ilnumerics.net/blog/wp-content/uploads/2012/02/plotall4n.png"><img src="http://ilnumerics.net/blog/wp-content/uploads/2012/02/plotall4n-1024x768.png" alt="" width="688" height="516" class="size-large wp-image-66" /></a><figcaption class="wp-caption-text">Execution speed comparison for varying n</figcaption></figure><br />
 Clearly, the numpy framework showed the worst performance &#8211; though, admittedly, we did not implement any optimizations for this platform. MATLAB, as expected, shows similarly long execution times. For the unoptimized algorithms, the ILNumerics implementation almost catches up with the execution speed of FORTRAN: the .NET implementation needs less than twice the time of the first, naïve FORTRAN algorithm. The influence of the size of n is negligible, since most of the ‘work’ of the algorithm lies in calculating the distances of one data point to all cluster centres. Therefore, only the dimensionality of the data and the number of clusters matter here.  </p>
<h2>Conclusion</h2>
<p>For the optimized versions of kmeans (the dashed lines in the figures) &#8211; especially for medium sized and larger data sets (k &gt; 200 clusters or m &gt; 400 dimensions) &#8211; the ILNumerics implementation runs at the same speed as the FORTRAN one. This is because ILNumerics builds optimizations into its builtin functions similar to those of the FORTRAN compiler. Also, the effort of circumventing the GC via the ILNumerics memory management and of avoiding bounds checks in inner loops pays off here. However, there is still potential for future speed-ups, since SSE extensions are not (yet) utilized.<br />
For smaller data sets, the overhead of repeatedly creating ILNumerics arrays becomes more important; this will be another target for future enhancements. Clearly visible from the plots is the high importance of the choice of algorithm: by reformulating the inner loop, a significant improvement was achieved for all frameworks. </p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/fast-faster-performance-comparison-c-ilnumerics-fortran-matlab-and-numpy-part-ii/">Fast. Faster …. Performance Comparison: C# (ILNumerics), FORTRAN, MATLAB and numpy – Part II</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://ilnumerics.net/blog/fast-faster-performance-comparison-c-ilnumerics-fortran-matlab-and-numpy-part-ii/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Comparison .NET JAVA C++ and FORTRAN</title>
		<link>https://ilnumerics.net/blog/comparison-net-java-c-and-fortran/</link>
		<comments>https://ilnumerics.net/blog/comparison-net-java-c-and-fortran/#comments</comments>
		<pubDate>Fri, 13 Jan 2012 14:35:04 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[Comparison]]></category>
		<category><![CDATA[c++]]></category>
		<category><![CDATA[comparison]]></category>
		<category><![CDATA[fortran]]></category>
		<category><![CDATA[java]]></category>
		<category><![CDATA[jobs]]></category>
		<category><![CDATA[popularity]]></category>

		<guid isPermaLink="false">http://ilnumerics.net/wblog/?p=36</guid>
		<description><![CDATA[<p>I recently stumbled on a plot provided by indeed.com: .NET, Java, C++, Fortan Job Trends NET jobs &#8211; Java jobs &#8211; C++ jobs &#8211; Fortan jobs I found it interesting that .NET even seems to be leading in the past and only recently Java had catched up with .NET. Somehow, I always thought it was &#8230; <a href="https://ilnumerics.net/blog/comparison-net-java-c-and-fortran/" class="more-link">Continue reading <span class="screen-reader-text">Comparison .NET JAVA C++ and FORTRAN</span> <span class="meta-nav">&#8594;</span></a></p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/comparison-net-java-c-and-fortran/">Comparison .NET JAVA C++ and FORTRAN</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>I recently stumbled on a plot provided by indeed.com: </p>
<div style="width:540px">
<a href="http://www.indeed.com/jobtrends?q=.NET%2C+Java%2C+C%2B%2B%2C+Fortan"><img src="http://www.indeed.com/trendgraph/jobgraph.png?q=.NET%2C+Java%2C+C%2B%2B%2C+Fortan"/></a></p>
<table width="100%" cellpadding="6" cellspacing="0" border="0" style="font-size:80%">
<tr>
<td><a href="http://www.indeed.com/jobtrends?q=.NET%2C+Java%2C+C%2B%2B%2C+Fortan">.NET, Java, C++, Fortran Job Trends</a></td>
<td align="right"><a href="http://www.indeed.com/jobs?q=.NET">.NET jobs</a> &#8211; <a href="http://www.indeed.com/jobs?q=Java">Java jobs</a> &#8211; <a href="http://www.indeed.com/jobs?q=C%2B%2B">C++ jobs</a> &#8211; <a href="http://www.indeed.com/jobs?q=Fortan">Fortran jobs</a></td>
</tr>
</table>
</div>
<p>I found it interesting that .NET even seems to have been leading in the past, and that only recently has Java caught up with .NET. Somehow, I always thought it was the other way around. Also, Fortran does not seem to be visible at all &#8211; most likely because you wouldn&#8217;t apply for a position just because &#8220;you know Fortran&#8221;. Rather, Fortran is a requirement you would learn when a position calls for it, I suppose. </p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/comparison-net-java-c-and-fortran/">Comparison .NET JAVA C++ and FORTRAN</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://ilnumerics.net/blog/comparison-net-java-c-and-fortran/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
