<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>The ILNumerics Blog &#187; optimization</title>
	<atom:link href="https://ilnumerics.net/blog/tag/optimization/feed/" rel="self" type="application/rss+xml" />
	<link>https://ilnumerics.net/blog</link>
	<description>The Productivity Machine  &#124;  A fresh attempt for scientific computing  &#124;  http://ilnumerics.net</description>
	<lastBuildDate>Thu, 05 Dec 2024 09:09:24 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.1.41</generator>
	<item>
		<title>ILNumerics Accelerator – A better Approach to faster Array Codes, Part II</title>
		<link>https://ilnumerics.net/blog/ilnumerics-accelerator-a-better-approach-to-faster-array-codes-part-ii/</link>
		<comments>https://ilnumerics.net/blog/ilnumerics-accelerator-a-better-approach-to-faster-array-codes-part-ii/#comments</comments>
		<pubDate>Tue, 08 Nov 2022 22:51:29 +0000</pubDate>
		<dc:creator><![CDATA[haymo]]></dc:creator>
				<category><![CDATA[Accelerator]]></category>
		<category><![CDATA[Features]]></category>
		<category><![CDATA[HPC]]></category>
		<category><![CDATA[ILNumerics]]></category>
		<category><![CDATA[Numerical Algorithms]]></category>
		<category><![CDATA[Scientific Computing]]></category>
		<category><![CDATA[accelerator]]></category>
		<category><![CDATA[hpc]]></category>
		<category><![CDATA[optimization]]></category>
		<category><![CDATA[performance]]></category>

		<guid isPermaLink="false">http://ilnumerics.net/blog/?p=1064</guid>
		<description><![CDATA[<p>A better Approach to an unsolved Problem TLDR: Authoring fast numerical array codes on .NET is a complex task. ILNumerics Computing Engine simplifies this task: it brings a convenient syntax, which is fully compatible with numpy and Matlab. And today, we bring back the free lunch: our brand new JIT compiler not only transforms numerical &#8230; <a href="https://ilnumerics.net/blog/ilnumerics-accelerator-a-better-approach-to-faster-array-codes-part-ii/" class="more-link">Continue reading <span class="screen-reader-text">ILNumerics Accelerator – A better Approach to faster Array Codes, Part II</span> <span class="meta-nav">&#8594;</span></a></p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/ilnumerics-accelerator-a-better-approach-to-faster-array-codes-part-ii/">ILNumerics Accelerator – A better Approach to faster Array Codes, Part II</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></description>
				<content:encoded><![CDATA[<h2>A better Approach to an unsolved Problem</h2>
<p style="font-size: +1.1em;margin-bottom: 12pt;text-align: justify"><em>TL;DR: Authoring fast numerical array code on .NET is a complex task. ILNumerics Computing Engine simplifies this task: it brings a convenient syntax which is fully compatible with numpy and Matlab. And today, we bring back <a href="http://www.gotw.ca/publications/concurrency-ddj.htm" target="_blank">the free lunch</a>: our brand-new JIT compiler not only transforms numerical algorithms into highly efficient code, it also automatically adapts to and parallelizes your workload on whatever hardware it finds at runtime. It segments and distributes even small workloads efficiently &#8211; more fine-grained than any manual configuration could. </em></p>
<p style="text-align: right">Author: H. Kutschbach (ILNumerics)  Reading time: 12 min</p>
<p>Since its introduction in 2007, ILNumerics has led innovation for technical computing in the .NET world. We have enabled a short, expressive syntax for authors of numerical algorithms. Writing array-based algorithms with ILNumerics feels very similar (and is compatible) to well-known prototyping systems, namely Matlab, numpy and all their successors.</p>
<p>Another focus of ILNumerics has always been performance. In fact, we started the company in 2013 with the goal of building the fastest technology on earth. Today we propose a better, faster approach to array code execution: <strong>ILNumerics Accelerator</strong>.</p>
<p>This is part 2 of an article series on ILNumerics Accelerator. In the <a title="ILNumerics Accelerator, Part I" href="/blog/ilnumerics-accelerator-a-better-approach-to-faster-array-codes-part-i/">first part</a> we explained why automatic parallelization has been too large a problem for the existing compiler landscape. In the following we start by describing the problem ILNumerics Accelerator solves. Then we&#8217;ll explain how it works.</p>
<h2>The Problem Space</h2>
<p>Math drives the world. High-tech companies, financial institutions, medical instruments, rocket science &#8230; the most innovative projects all have one thing in common: they are driven by numerical data and by complex mathematical algorithms working on these data. Most data can be represented as n-dimensional arrays, which make it possible to implement great complexity without great effort. Consequently, array-based algorithms play a huge role in today&#8217;s most innovative industries.</p>
<p>Over the years we have seen many attempts to realize the advantages of numerical arrays for small and large developer teams working on .NET. Some large teams have even built their own solutions. And indeed, building a math library in C# is easy to start with! You&#8217;ll see many low-hanging fruits. With a few lines of code you are able to create matrices and to sum them in short expressions like &#8216;A + B&#8217;! Woooh!! If this is all you need, you can safely skip the rest of this article&#8230;</p>
<p>Often enough, managers lose their enthusiasm when, after some months, a huge device code base built on such a trivial &#8216;solution&#8217; eventually shows execution speeds far below their requirements.</p>
<p><em>Optimal execution speed is only realized by <strong>efficient </strong>utilization of <strong>all </strong>compute resources.</em></p>
<p>What may sound simple is actually a problem that has been waiting for a solution for more than two decades. Today&#8217;s hardware is mostly heterogeneous. Even inside a single CPU there are at least two levels of parallelism. Often, there is also at least one GPU around.</p>
<p>The only working way to utilize heterogeneous computing resources today has been to manually distribute manually selected parts of an algorithm and its data to manually selected devices. Many tools exist which aim to help the programming expert decide, write and fine-tune the required code at development time. This still requires expert knowledge and intimate insight into the data flow of your algorithms. And it requires time. A lot of time! The large code base created this way needs testing and maintenance.</p>
<p>Nevertheless, this demanding approach still cannot harness the full performance potential! Needless to say, the resulting code is far from expressive or easy to understand. ILNumerics Accelerator is here to solve this problem for the large domain of numerical array codes.</p>
<h3>The Big Performance Picture</h3>
<p>Let&#8217;s widen our perspective on the performance topic! Three major factors influence execution speed:</p>
<p><strong>1. The Algorithm</strong></p>
<p>Basically, a program performs a transformation of data from one representation (the input) into another (the result). The transformation is described by the program&#8217;s algorithm(s). This high-level, business view of the software is &#8216;lowered&#8217; by the programmer when she writes the concrete program code, enabling a compiler to understand the intent. In subsequent compilation steps the code is further lowered by one or more compilers into other representations which the hardware can understand better. So the programmer and the compiler(s) share the responsibility of implementing the intended transformation in a way that it is carried out on the available hardware and produces the correct result in the shortest time possible.</p>
<p><strong>2. The Hardware </strong></p>
<p>An optimally fast program must recognize and utilize all available hardware resources. The exact configuration and properties of all devices, however, are typically known only at runtime. This implies that a significant amount of transformation can only be done at runtime (-&gt; JIT compiler). One may say a program could be specialized for one specific set of computing resources. This is of course true, but it only shifts the issue to the next time the hardware is renewed. Further, some properties of the hardware are inherently known only at runtime. Two examples: the resource utilization rate, and influences from other programs and threads running concurrently on the computer. If the final executable fails to keep all parts of the hardware busy with useful instructions, the program will take more time to run than necessary.</p>
<p><strong>3. The Data </strong></p>
<p>In general, technical data consists of individual numbers, or &#8216;scalars&#8217;. Traditionally, processor instructions deal with scalar data only. In array algorithms the data forms arrays of arbitrary shape, size and dimensionality. This can be seen as an <em>extension</em> of the traditional scalar data model, since n-dimensional arrays include scalar data as a special case.</p>
<p>The array extension introduces great new complexity. Instead of a single number with one type and one value, an array instruction deals with input data which may have one or many &#8211; often several million &#8211; individual values! But arrays can also be empty&#8230;</p>
<p>While there is often exactly one way to execute a scalar instruction, executing an array instruction <em>efficiently</em> requires careful planning and ordering, and an execution strategy selected and aligned with the (greatly varying) data and hardware properties. Again, failing to do so (at runtime) will lead to sub-optimal execution times.</p>
<p><strong>Workload &amp; Segmentation</strong></p>
<p>Together with the algorithm, the data forms another important factor: the &#8216;workload&#8217;. It is a measure of the number of computational steps required to perform a certain data transformation &#8211; say, the number of clock cycles a CPU core requires to compute the result of the &#8216;sin(A)&#8217; instruction for a matrix A of a certain size. Mapping the workload onto a concrete hardware device yields the effort, or cost (in terms of minimal time or energy), required to complete the transformation on this device.</p>
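<p>As a rough illustration of the idea &#8211; this is <em>not</em> the actual ILNumerics cost model, and all names and numbers below are invented &#8211; mapping a workload onto a device could be sketched like this:</p>
<pre class="brush: csharp; title: ; notranslate">
// Illustrative sketch only: not the ILNumerics cost model.
class DeviceInfo {
    public double CyclesPerElement;   // e.g. measured cost of sin() per element
    public double ClockRateHz;        // effective clock rate of the device
    public double LaunchOverheadSec;  // fixed cost: kernel launch, thread wakeup, ...
}

static double EstimatedCostSeconds(DeviceInfo dev, long numElements) {
    // workload (cycles) mapped onto a concrete device yields its cost (time)
    return dev.LaunchOverheadSec + dev.CyclesPerElement * numElements / dev.ClockRateHz;
}
</pre>
<p>For a large matrix the per-element term dominates and a GPU may win; for a tiny matrix the fixed overhead dominates and a CPU core wins. This is why the decision must be made at runtime, per data size.</p>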
<p>Obviously, throughout a program&#8217;s execution there are many <em>intermediate steps</em>, transforming various data with varying properties (sizes, etc.). The overall workload &#8211; in a slightly simplified view &#8211; is the aggregation of the workloads of all intermediate steps.</p>
<p>To keep a program&#8217;s execution time small, each intermediate step must be carefully adapted to, and executed on, the fastest hardware resource(s) currently available &#8211; those yielding the lowest cost. Compare this to how a programmer today manually selects a certain device for (manually selected) large, predefined parts of a program at development time!</p>
<p><strong>Manual segmentation and device resource selection cannot bring optimal performance for all parts of a program.</strong></p>
<h2>Array Codes &#8211; done (&amp; run) right</h2>
<p>The considerations above should make it obvious that no static compiler is able to deliver optimal execution performance (leaving toy examples aside). Data and hardware properties are subject to change &#8211; and so is the optimal strategy for lowering the individual parts of a user algorithm to hardware instructions! The lowest execution times will only be achieved if <em>all parallel potential</em> of an algorithm is identified and used to execute its instructions <em>concurrently,</em> on <em>all </em>available hardware resources, and <em>efficiently</em>.</p>
<p>But let&#8217;s skip all reasoning and jump right into the interesting part! Here is how ILNumerics Accelerator achieves exactly this goal:</p>
<ol>
<li>Array instructions within the user program are identified and merged into &#8216;<em>segments</em>&#8217;.</li>
<li>Segment limits are optimized to hold chunks of suitable workload whose size can be determined quickly at runtime.</li>
<li>Before executing a segment &#8230;
<ol start="1">
<li>The cost of the segment with the current data is computed for each hardware resource.</li>
<li>The best (i.e.: the fastest) hardware resource is selected.</li>
<li>The segment&#8217;s array instructions are optimized for the selected resource and the current data.</li>
</ol>
</li>
<li>The resulting kernel is scheduled on the selected resource for <em>asynchronous</em> execution and is cached for later reuse.</li>
</ol>
<p>Likely, this list requires some explanation.</p>
<div style="width: 474px; " class="wp-video"><!--[if lt IE 9]><script>document.createElement('video');</script><![endif]-->
<video class="wp-video-shortcode" id="video-1064-1" width="474" height="267" preload="metadata" controls="controls"><source type="video/mp4" src="https://ilnumerics.net/media/andere/ILNumerics_Segments_2022.mp4?_=1" /><a href="https://ilnumerics.net/media/andere/ILNumerics_Segments_2022.mp4">https://ilnumerics.net/media/andere/ILNumerics_Segments_2022.mp4</a></video></div>
<p>[ Video does not show? Download here: <a href="/media/andere/ILNumerics_Segments_2022.mp4">https://ilnumerics.net/media/andere/ILNumerics_Segments_2022.mp4</a> ]</p>
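<p>In purely hypothetical pseudocode &#8211; none of these types or members exist in the actual ILNumerics API &#8211; the per-segment runtime dispatch described by the list above might look roughly like this:</p>
<pre class="brush: csharp; title: ; notranslate">
// Hypothetical sketch of a segment's runtime dispatch; not ILNumerics API.
static void ExecuteSegment(Segment segment, ArrayData inputs) {
    Device best = null;
    double bestCost = double.MaxValue;
    // steps 3.1 / 3.2: compute the cost per device, pick the fastest
    foreach (Device dev in Devices.All) {        // CPU cores, OpenCL devices, ...
        double cost = segment.EstimateCost(dev, inputs);
        if (cost &lt; bestCost) { bestCost = cost; best = dev; }
    }
    // steps 3.3 / 4: optimize for that device (kernels are cached for reuse)
    // and enqueue asynchronously - control returns without waiting, so
    // independent segments may run in parallel.
    Kernel kernel = segment.GetOrCompileKernel(best, inputs);
    best.EnqueueAsync(kernel, inputs);
}
</pre>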
<p>ILNumerics Accelerator takes a new approach to efficient array code execution. Instead of attempting a global analysis (the top-down approach which has failed in so many projects) we optimize the program from the bottom up. Our compiler recognizes all fundamental array instructions and expressions thereof and merges them into &#8216;islands&#8217; (segments) of well-known semantics and execution cost. Each segment contains a small JIT compiler, specialized to quickly compile the segment&#8217;s instructions at runtime into highly optimized low-level code – for the .NET CLR <em>and</em> for any OpenCL device! This step recognizes and adapts to all hardware and data properties found at runtime. And it maintains compatibility with all .NET platforms.</p>
<p>Currently, we support the unary, binary and reduction array instructions provided by ILNumerics Computing Engine, and all expressions made thereof. No changes or annotations to existing code are required. The ILNumerics language supports all features of the Matlab and numpy languages. It will be possible to build connectors to other languages and to custom array libraries exposing similar semantics.</p>
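<p>For example, an ordinary ILNumerics expression like the following &#8211; written, as usual, inside a class deriving from ILMath &#8211; mixes unary, binary and reduction instructions and so qualifies as-is (illustrative only; names and sizes are arbitrary):</p>
<pre class="brush: csharp; title: ; notranslate">
// unary (sin), binary (*, +) and a reduction (sum) - no annotations needed
ILArray&lt;double&gt; A = rand(1000, 1000);
ILArray&lt;double&gt; B = rand(1000, 1000);
ILArray&lt;double&gt; C = sum(sin(A) * B + A);  // a candidate for segment merging
</pre>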
<p>During build, each array expression in the user code files is automatically replaced by a call to a specialized, auto-generated segment. In addition to hosting a JIT compiler for its array instructions, a segment also &#8216;knows&#8217; the workload of its inherent computations. When the segment is called at runtime, it uses this information to compute the cost of executing the segment on each device and to identify the device promising the earliest completion. It then enqueues the optimized kernel to this device for computation.</p>
<p>Then &#8211; without waiting for the result &#8211; it immediately passes control on to the next segment of the algorithm. If subsequent segments do not depend on each other, they are scheduled onto different threads or devices and execute in parallel!</p>
<p>All technical details of the new method are found in the patents and patent applications listed at the end of this article.</p>
<h3>Advantages</h3>
<p>ILNumerics Accelerator automatically executes your algorithm in the fastest possible way:</p>
<ul>
<li>During execution it finds chunks of workload which can be computed concurrently.</li>
<li>It identifies the best suited hardware device available at this time.</li>
<li>For each segment it adapts the execution strategy to the fastest device, and</li>
<li>Computes the workload of many segments in parallel.</li>
</ul>
<p>Therefore, it <em>scales</em> execution times with the hardware <em>without recompilation or manual code adjustments.</em></p>
<p>Our compiler &#8230;</p>
<ul>
<li>Supports vector registers, multiple CPU cores, and all OpenCL devices.</li>
<li>Removes temporary arrays and manages memory between devices transparently.</li>
<li>Applies many new and important optimizations, on kernel and on segment level.</li>
<li>Is a fully managed solution: it targets the .NET CLR and is thus compatible with all .NET platforms.</li>
</ul>
<p>&#8230; which brings many benefits:</p>
<ul>
<li>Efficient use of all compute resources.</li>
<li>No expert effort required.</li>
<li>Shorter time to market.</li>
<li>Adjusts to new hardware.</li>
<li>Keeps your code maintainable.</li>
</ul>
<h3>&#8220;How much faster is it&#8221;?</h3>
<p>This is a very popular question! A comprehensive answer deserves its own article, really.</p>
<p>In short: it depends on your hardware, on your algorithm, and on your data! The question should rather be: how much more efficiently will it utilize my hardware? For the majority of real-world algorithms the answer is: as efficiently as possible! Moreover, it does so <em>automatically</em>!</p>
<p>(If you are still longing for a number: on 2015-ish hardware we see speed-up factors between 3 and more than 300. Most of the time the optimized code runs faster by at least an order of magnitude.)</p>
<h3>Where can I get it ?</h3>
<p>ILNumerics Accelerator has entered the public beta phase. It will be released with ILNumerics Ultimate VS version 7. The pre-release is available on nuget. <a title="Getting Started Guide I - Sum Examples" href="/accelerate-sum-examples.html">Start here!</a> General documentation on the new Accelerator is found <a title="ILNumerics Accelerator Compiler Documentation" href="/ilnumerics-accelerator-compiler.html">here</a>. It will be completed over the next weeks.</p>
<h3>Patents</h3>
<p>ILNumerics software is protected by international patents and patent applications. WO2018197695A1, US11144348B2, JP2020518881A, EP3443458A1, DE102017109239A1, CN110383247A, DE102011119404.9, EP22156804. ILNumerics is a registered trademark of ILNumerics GmbH, Berlin, Germany.</p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/ilnumerics-accelerator-a-better-approach-to-faster-array-codes-part-ii/">ILNumerics Accelerator – A better Approach to faster Array Codes, Part II</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://ilnumerics.net/blog/ilnumerics-accelerator-a-better-approach-to-faster-array-codes-part-ii/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="https://ilnumerics.net/media/andere/ILNumerics_Segments_2022.mp4" length="1709355" type="video/mp4" />
		</item>
		<item>
		<title>Directions to the ILNumerics Optimization Toolbox</title>
		<link>https://ilnumerics.net/blog/directions-to-the-ilnumerics-optimization-toolbox/</link>
		<comments>https://ilnumerics.net/blog/directions-to-the-ilnumerics-optimization-toolbox/#comments</comments>
		<pubDate>Wed, 14 Jan 2015 16:27:44 +0000</pubDate>
		<dc:creator><![CDATA[haymo]]></dc:creator>
				<category><![CDATA[C#]]></category>
		<category><![CDATA[ILNumerics]]></category>
		<category><![CDATA[Scientific Computing]]></category>
		<category><![CDATA[Usage]]></category>
		<category><![CDATA[Getting Started]]></category>
		<category><![CDATA[optimization]]></category>
		<category><![CDATA[Toolbox]]></category>
		<category><![CDATA[Tutorial]]></category>
		<category><![CDATA[Visual Studio]]></category>

		<guid isPermaLink="false">http://ilnumerics.net/blog/?p=738</guid>
		<description><![CDATA[<p>As of yesterday the ILNumerics Optimization Toolbox is out and online! It&#8217;s been quite a challenge to bring everything together: some of the best algorithms, the convenience you as a user of ILNumerics expect and deserve, and the high performance standards ILNumerics is known for. We believe that all these goals could be &#8230; <a href="https://ilnumerics.net/blog/directions-to-the-ilnumerics-optimization-toolbox/" class="more-link">Continue reading <span class="screen-reader-text">Directions to the ILNumerics Optimization Toolbox</span> <span class="meta-nav">&#8594;</span></a></p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/directions-to-the-ilnumerics-optimization-toolbox/">Directions to the ILNumerics Optimization Toolbox</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>As of yesterday, the ILNumerics Optimization Toolbox is out and online! It&#8217;s been quite a challenge to bring everything together: some of the best algorithms, the convenience you as a user of ILNumerics expect and deserve, and the high performance standards ILNumerics is known for. We believe we have achieved all of these goals.</p>
<p><span id="more-738"></span></p>
<p>During a lengthy beta phase we received a whole bunch of precise and enormously helpful feedback from you. We really appreciate it and would again like to say thanks!</p>
<p><a href="http://ilnumerics.net/blog/wp-content/uploads/2015/01/Optim_Camel_LBFGS.png"><img class="aligncenter size-full wp-image-747" src="http://ilnumerics.net/blog/wp-content/uploads/2015/01/Optim_Camel_LBFGS.png" alt="Optim_Camel_LBFGS" width="1026" height="868" /></a>ILNumerics Optimization Toolbox adds a number of functions to ILNumerics, useful to find solutions of common optimization problems. Since everything is nicely integrated into ILNumerics it helps you solve the problem easily and very efficiently. Optimization applications like the one shown as screenshot above can now get realized in a couple of minutes!</p>
<p>This blog post sheds some light on the very first steps with the new Optimization Toolbox. It helps you start quickly and lists some common documentation sources.</p>
<h2>Obtaining the Optimization Toolbox</h2>
<p>The Optimization Toolbox is available as a dedicated package and thus must be purchased and installed individually. ILNumerics 4.6 or above is required for it to work. Existing customers can evaluate the new toolbox by installing an extended trial on top of their existing ILNumerics Ultimate VS installation. Just <a href="mailto:sales@ilnumerics.net">let us know</a> and we will lead you to the download. Another option is the trial: it includes all toolboxes, so you can start right away by <a href="http://ilnumerics.net/download.html">downloading</a> and installing a trial of ILNumerics Ultimate VS and getting familiar with all optimization methods easily.</p>
<h2>Optimization Toolbox Setup</h2>
<p>The Optimization Toolbox obviously depends on the ILNumerics Computing Engine. It installs the managed assembly ILNumerics.Optimization.dll into the GAC and also makes it available inside Visual Studio as a reference for your application projects:</p>
<p><a href="http://ilnumerics.net/blog/wp-content/uploads/2015/01/2015-01-14-16_26_48-Reference-Manager-ConsoleApplication18.png"><img class="aligncenter size-full wp-image-740" src="http://ilnumerics.net/blog/wp-content/uploads/2015/01/2015-01-14-16_26_48-Reference-Manager-ConsoleApplication18.png" alt="2015-01-14 16_26_48-Reference Manager - ConsoleApplication18" width="700" height="351" /></a>Afterwards, the ILNumerics.Optimization class is found in the ILNumerics namespace, which is commonly included in your source files anyway. A super-mini but complete starting example in a fresh C# console application could look as follows:</p>
<pre class="brush: csharp; title: ; notranslate">
using System;
using ILNumerics;

namespace ConsoleApplication1 {
    // deriving from ILMath lets us write 'sum', 'ones' without a class prefix
    class Program : ILMath {

        // objective function: f(A) = sum((A - 1) .* A), minimal at A = 0.5
        static ILRetArray&lt;double&gt; myObjFunc(ILInArray&lt;double&gt; A) {
            using (ILScope.Enter(A)) {
                return sum((A - 1) * A);
            }
        }
        static void Main(string[] args) {
            // minimize, starting from the point (1,1,1,1)
            ILArray&lt;double&gt; M = Optimization.fmin(myObjFunc, ones(1,4));
            Console.WriteLine(M);
        }
    }
}
</pre>
<p>Note how we derived our console class from ILNumerics.ILMath! This allows us to omit the namespace.class prefix: &#8216;ILMath.sum&#8217; and &#8216;ILMath.ones&#8217; become just &#8216;sum&#8217; and &#8216;ones&#8217;.</p>
<p>If you are lucky enough to use Visual Studio 2015, it also allows you to include the static ILNumerics.Optimization class in the &#8216;using&#8217; directives on top of your source code files. This injects all public functions of the Optimization class into your scope, which gives an even shorter syntax: &#8216;fmin&#8217; and all other optimization functions are then readily available right next to the common ILMath functions:</p>
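<p>For illustration, the top of such a file could then look like this (C# 6 &#8216;using static&#8217; directives; the exact namespace qualifiers may differ in your project):</p>
<pre class="brush: csharp; title: ; notranslate">
using System;
using ILNumerics;
using static ILNumerics.ILMath;        // sum, ones, ... without the ILMath prefix
using static ILNumerics.Optimization;  // fmin, ... without the Optimization prefix

class Program {
    static ILRetArray&lt;double&gt; myObjFunc(ILInArray&lt;double&gt; A) {
        using (ILScope.Enter(A)) {
            return sum((A - 1) * A);
        }
    }
    static void Main() {
        ILArray&lt;double&gt; M = fmin(myObjFunc, ones(1, 4));
        Console.WriteLine(M);
    }
}
</pre>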
<p><a href="http://ilnumerics.net/blog/wp-content/uploads/2015/01/2015-01-14-16_44_19-ConsoleApplication1-Debugging-Microsoft-Visual-Studio.png"><img class="aligncenter size-full wp-image-743" src="http://ilnumerics.net/blog/wp-content/uploads/2015/01/2015-01-14-16_44_19-ConsoleApplication1-Debugging-Microsoft-Visual-Studio.png" alt="2015-01-14 16_44_19-ConsoleApplication1 (Debugging) - Microsoft Visual Studio" width="661" height="373" /></a>That&#8217;s it already! Happy optimizing!</p>
<h2>Optimization Examples</h2>
<p>A number of example applications have been added to the <a href="http://ilnumerics.net/examples.php">examples section</a>. Basically, they accompany the corresponding <a href="/ilnumerics-optimization-toolbox.html">online documentation</a> and let you follow the tutorials step by step on your own machine.</p>
<p>In order to start with an example application, download the example as a zip package and extract it into a new folder on your local machine. Open the &#8216;*.csproj&#8217; file contained in the example package with Visual Studio. You will notice missing references in the References node of your project in the Solution Explorer:</p>
<p><a href="http://ilnumerics.net/blog/wp-content/uploads/2015/01/2015-01-14-17_03_38-.png"><img class="aligncenter size-full wp-image-744" src="http://ilnumerics.net/blog/wp-content/uploads/2015/01/2015-01-14-17_03_38-.png" alt="2015-01-14 17_03_38-" width="386" height="523" /></a>Just remove these old references and replace them with references to the files actually installed on your system – using the common method described above.</p>
<p>Hit F5 and run the example!</p>
<h2>Documentation</h2>
<p>The ILNumerics Optimization Toolbox online documentation is available here:</p>
<p><a href="http://ilnumerics.net/ilnumerics-optimization-toolbox.html">http://ilnumerics.net/ilnumerics-optimization-toolbox.html</a></p>
<p>The API /class documentation for all functions of ILNumerics.Optimization is available here:</p>
<p><a href="http://ilnumerics.net/apidoc/?topic=html/T_ILNumerics_Optimization.htm">http://ilnumerics.net/apidoc/?topic=html/T_ILNumerics_Optimization.htm</a></p>
<h2>Wrap up</h2>
<p>We gave a short introduction to obtaining and installing the ILNumerics Optimization Toolbox for ILNumerics Computing Engine, and we pointed you to the common places for examples and documentation.</p>
<p>If you run into trouble or have useful suggestions and feedback, <a href="/direct-support.html">let us know</a>! Contact <a href="mailto:sales@ilnumerics.net">sales@ilnumerics.net</a> for any licensing questions.</p>
<p><a href="http://ilnumerics.net/blog/wp-content/uploads/2015/01/2014-12-19-17_06_40-Iterations-BFGS_-8-L-BFGS_-22.png"><img class="aligncenter size-full wp-image-739" src="http://ilnumerics.net/blog/wp-content/uploads/2015/01/2014-12-19-17_06_40-Iterations-BFGS_-8-L-BFGS_-22.png" alt="2014-12-19 17_06_40-Iterations BFGS_ 8 L-BFGS_ 22" width="1026" height="805" /></a></p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/directions-to-the-ilnumerics-optimization-toolbox/">Directions to the ILNumerics Optimization Toolbox</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://ilnumerics.net/blog/directions-to-the-ilnumerics-optimization-toolbox/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>LLVM everywhere</title>
		<link>https://ilnumerics.net/blog/llvm-everywhere/</link>
		<comments>https://ilnumerics.net/blog/llvm-everywhere/#comments</comments>
		<pubDate>Sun, 04 Mar 2012 16:52:59 +0000</pubDate>
		<dc:creator><![CDATA[haymo]]></dc:creator>
				<category><![CDATA[Interesting/ useless]]></category>
		<category><![CDATA[F#]]></category>
		<category><![CDATA[library]]></category>
		<category><![CDATA[llvm]]></category>
		<category><![CDATA[OpenCL]]></category>
		<category><![CDATA[optimization]]></category>

		<guid isPermaLink="false">http://ilnumerics.net/blog/?p=146</guid>
		<description><![CDATA[<p>F#News today published some efforts to utilize the impressive power of the LLVM compiler suite from within F#. The attempts turned out to be neither mature nor stable yet &#8211; but they mark the potential of multi-level compilation for runtime optimization: use a high-level language to formulate your algorithm and let lower &#8230; <a href="https://ilnumerics.net/blog/llvm-everywhere/" class="more-link">Continue reading <span class="screen-reader-text">LLVM everywhere</span> <span class="meta-nav">&#8594;</span></a></p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/llvm-everywhere/">LLVM everywhere</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>F#News today published some <a href="http://fsharpnews.blogspot.com/2012/03/using-llvm-from-f-under-windows.html">efforts</a> to utilize the impressive power of the <a href="http://llvm.org/">LLVM</a> compiler suite from within F#. The attempts turned out to be neither mature nor stable yet &#8211; but they mark the potential of multi-level compilation for runtime optimization: use a high-level language to formulate your algorithm and let lower-level optimizations translate it into highly efficient (platform-specific) code. The attempt demonstrated in the post mentioned above still does not sufficiently hide the internals of LLVM. A truly comfortable library would offer the user only a single switch: <code>UsePlatformOptimization</code> &#8211; <code>on/off</code>. It would then be the library&#8217;s responsibility to transform the high-level algorithm into valid input for the optimizing framework.</p>
<p>LLVM is not the only interesting target for such an optimization scenario. Another target is OpenCL. However, most graphics card vendors, and Intel (we don&#8217;t know about AMD?), already rely on LLVM for their OpenCL implementations. So it appears there is no way around LLVM &#8230;</p>
<p>The post <a rel="nofollow" href="https://ilnumerics.net/blog/llvm-everywhere/">LLVM everywhere</a> appeared first on <a rel="nofollow" href="https://ilnumerics.net/blog">The ILNumerics Blog</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://ilnumerics.net/blog/llvm-everywhere/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
