
Scientific Computing Online: IPython Notebook, Shiny (R) and ILNumerics

It seems we are facing a trend at the moment: scientific computing, math and visualization software for web browsers. With our interactive web examples we have taken a step in that direction, too: visitors to our website can change the C# code of our plotting and visualization demos in order to create new SVG, PNG, JPG or EXE output. This allows people to easily try out the ILNumerics syntax and our powerful 2d and 3d visualization features for .NET. In addition, ILView offers a convenient way to interactively explore scenes that are created with ILNumerics.

Two other web applications are causing a lot of excitement in the scientific community at the moment: the IPython Notebook and Shiny, a tool for creating web applications in R. Let’s have a closer look…

IPython Notebook: “Interactive Computational Environment”

The IPython Notebook addresses the huge number of Python users in the scientific community. It basically offers a new way of writing papers: it is a web-based editor for code execution, math, text and visualization. Because the IPython Notebook combines all the parts you normally need to write a scientific paper, you won’t have to import and export different elements between several domain-specific software applications: “Everything related to my analysis is located in one unified place”, explains Philip J. Guo in his blog (http://www.pgbovine.net/ipython-notebook-first-impressions.htm). Once you have finished your paper, you can share your IPython Notebook as HTML or PDF with your colleagues, your professor etc.

Shiny: “Easy web applications in R”

Shiny stands for a different approach: it allows you to turn your own analyses into interactive web applications. While IPython obviously addresses Python users, Shiny is based on R, a programming language that is still very popular among statisticians. What makes Shiny interesting are its interactivity features: most demos on the Shiny website let you choose input parameters from text fields or drop-downs to dynamically change the output visualization. The code is essentially plain R, so users who are familiar with that language will easily be able to create interactive data visualization applications for their websites using Shiny.

Disadvantages: Performance does matter

Both approaches make the web browser accessible for specific needs of scientific visualization: the IPython Notebook offers a convenient tool for sharing the results of analytics-related research; Shiny allows R developers to publish particular interactive plots on the web.

However, both projects are limited, mainly for technological reasons. The level of performance that can be achieved with either platform is restricted: you will notice this at the latest when you start creating complex 3d scenes with either Python or R. The same holds true for the platforms’ web applications…

Outlook: Scientific Computing online

For certain purposes, web-based scientific computing software offers convenient new solutions. But if you want to realize complex interactive 3d visualizations, you still won’t use any of them; you will use an application on your local machine instead.

Our interactive web examples point in the direction we want to go. In order to make scientific computing online more powerful, we are working on the next step of our approach: full WebGL support for ILNumerics. Stay tuned…

Microsoft.Numerics, Cloud Numerics for Azure – a short Review

Today I found some time to take a look at the Cloud Numerics project at Microsoft. I started with the overview/introduction post by Ronnie Hoogerwerf on the Cloud Numerics blog on MSDN.

The project aims at computations on very large distributed data sets and is intended for Azure. Interesting news for me: the library shows quite some similarities to ILNumerics. It provides array classes on top of native wrappers, utilizing MPI, PBLAS and ScaLAPACK. A runtime is deployed with the project binaries: Microsoft.Numerics, which provides all the classes described here.

‘Local’ Arrays in “Cloud Numerics”

The similarity is most obvious when comparing the array implementations: both ILNumerics and Cloud Numerics utilize multidimensional generic arrays. Cloud Numerics arrays all derive from Microsoft.Numerics.IArray<T> – not to be confused with ILNumerics’ local arrays ILArray<T> ;)!

Important properties of an array A in ILNumerics are provided by the concrete array implementation (A.Size.NumberOfElements, A.Size.NumberOfDimensions, A.Reshape, A.T for the transpose, and so on). On the Cloud Numerics side, those properties are provided by the interface IArray<T>: A.NumberOfDimensions, A.NumberOfElements, A.Reshape(), A.Transpose() and so on.
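To make the analogy concrete, here is a quick sketch of the ILNumerics side, with the Cloud Numerics counterparts in comments – member names as cited above; exact signatures may differ, check the respective API references:

using ILNumerics;

ILArray<double> A = ILMath.rand(3, 4);      // a 3 x 4 random matrix
int nElem = A.Size.NumberOfElements;        // 12  (Cloud Numerics: A.NumberOfElements)
int nDims = A.Size.NumberOfDimensions;      // 2   (Cloud Numerics: A.NumberOfDimensions)
ILArray<double> At = A.T;                   // transpose (Cloud Numerics: A.Transpose())
ILArray<double> B = A.Reshape(2, 6);        // same data, new shape (Cloud Numerics: A.Reshape())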

A similar analogy is found in the element types supported by ILArray<T> and Microsoft.Numerics.IArray<T>. Both allow the regular System numeric value types, such as System.Int32, System.Double and System.Single. Interestingly, neither relies on System.Numerics.Complex as the main double precision complex data element type; both rather implement their own complex types, for single as well as double precision.
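On the ILNumerics side the dedicated value types are complex and fcomplex; a short sketch, written from memory of the API:

using ILNumerics;

// build a complex array from real and imaginary parts
ILArray<complex> C = ILMath.ccomplex(ILMath.rand(2, 2), ILMath.rand(2, 2));
// the single precision counterpart uses the fcomplex element type
ILArray<fcomplex> F = ILMath.tofcomplex(C);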

Both array types support vector expansion – at least Cloud Numerics promises to do so in the next release; for now, only scalar binary operations are allowed for arrays. For an explanation of the feature, the Cloud Numerics blog refers to NumPy rather than ILNumerics, though.
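In ILNumerics, vector expansion looks as follows – a minimal sketch; the row vector is implicitly replicated along the missing dimension:

using ILNumerics;

ILArray<double> M = ILMath.rand(3, 4);   // 3 x 4 matrix
ILArray<double> r = ILMath.rand(1, 4);   // 1 x 4 row vector
// r is expanded along the first dimension to match M: every row of M
// gets r added elementwise - no explicit replication needed
ILArray<double> S = M + r;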

Array Storage in “Cloud Numerics”

The similarities end when it comes to internal array storage. Both libraries store multidimensional arrays as one-dimensional arrays internally – but Cloud Numerics stores its elements in native one-dimensional arrays. The authors point to the 2 GB size limit on .NET objects and further elaborate:

Additionally, the underlying native array types in the “Cloud Numerics” runtime are sufficiently flexible that they can be easily wrapped in environments other than .NET (say Python or R) with very little additional effort.

It is hard to follow that view. In my experience, .NET arrays are perfectly suitable for this kind of interaction with native libraries, since in the end it is just a pointer to memory that gets passed to those libs. Regarding the 2 GB limit: I assume a ‘problem size’ of more than 2 GB would hardly be handled on a single node anyway. Especially in a framework for distributed memory, I would have expected the switch from single-node (shared memory) storage to distributed storage to happen at about this limit, at the latest.
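To illustrate the point about native interop: a managed array can simply be pinned and handed to native code as a raw pointer. A minimal sketch – ‘mynative’ and ‘vector_sum’ are hypothetical placeholders here; any BLAS-style C export (a PBLAS or ScaLAPACK routine, say) is called the same way:

using System;
using System.Runtime.InteropServices;

static class Interop {
    // hypothetical native export - stands in for any BLAS-style C routine
    [DllImport("mynative", CallingConvention = CallingConvention.Cdecl)]
    static extern double vector_sum(IntPtr data, int length);

    public static double Sum(double[] a) {
        // pin the managed array so the GC cannot move it ...
        GCHandle h = GCHandle.Alloc(a, GCHandleType.Pinned);
        try {
            // ... and hand the raw pointer to the native side
            return vector_sum(h.AddrOfPinnedObject(), a.Length);
        } finally {
            h.Free();   // release the pin in any case
        }
    }
}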

As a consequence, interaction between Cloud Numerics and .NET arrays becomes somewhat clumsy and – when it comes to really large data sets – presumably comes with a performance hit (disclaimer: untested, of course).

The differences keep coming: the indexing features in Cloud Numerics are rather basic. For now, only scalar element access is supported, and the number of dimension specifiers must match the number of dimensions of the array. Subarrays therefore seem impossible to work with. I will keep an eye on whether the project supports subarray features like A[full, end / 2 + 1] in one of the next releases ;)
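For comparison, subarray access in ILNumerics – a sketch along the lines of the example above (full, end and the range helper r are static members of ILMath):

using ILNumerics;
using static ILNumerics.ILMath;   // brings rand, full, end, r into scope

ILArray<double> A = rand(4, 8);
// all rows, the single column right of the middle - the example from above
ILArray<double> col = A[full, end / 2 + 1];
// ranges work as well: every row, the complete second half of the columns
ILArray<double> right = A[full, r(end / 2 + 1, end)];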

I wonder how memory management is done in Cloud Numerics. The library provides overloaded operators and hence faces the same problems which led to the sophisticated memory management in ILNumerics: executed in tight loops, expressions like

A = 0.5 * (A + A.T) + 0.5 * (A - A.T)

on ‘large’ arrays A will inevitably lead to memory pollution if run without deterministic disposal! Not to mention (virtual) memory fragmentation and the problems introduced by heavy unmanaged resources in conjunction with .NET objects and the GC… I have not found the time to really test it live, but I am almost sure this approach contradicts the needs of the targeted audience and their really large problem sizes. Unless there is some hidden mechanism in the runtime – which I doubt, since the examples use ‘var’ and regular C#, which offers no option to overload the assignment operator – this could evolve into a real nuisance IMO.
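For comparison, this is – roughly, from memory of the API – how ILNumerics keeps such a loop clean: an artificial scope collects all temporaries and disposes them deterministically at its end:

using ILNumerics;

ILArray<double> A = ILMath.rand(1000, 1000);
for (int i = 0; i < 100; i++) {
    using (ILScope.Enter()) {
        // all temporaries created below (A + A.T, the scaled copies, ...)
        // are registered with the scope ...
        A.a = 0.5 * (A + A.T) + 0.5 * (A - A.T);
    }   // ... and their storage is reclaimed deterministically right here
}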

Distributed Arrays

This part seems straightforward. It follows the established scheme known from MPI but offers a nicer interface to the user. The Cloud Numerics runtime also wraps away the overhead of cluster management and array slicing to a good extent. However, the question of memory management arises on the distributed side as well: since the API exposed to the user (obviously?) does not take care of disposing temporary arrays in a timely fashion, performance for large arrays will most likely suffer.

As soon as I find out more details about their internal memory management I will post them here – hopefully together with some corrections to my assumptions.