Friday, September 27, 2013

JavaScript for Embedded - Does it make sense?

Espruino - or JavaScript for Things, as they call it - has just collected its Kickstarter funding. It's a JavaScript interpreter for MCUs, to be released as open source SW/HW. A first demonstrator already exists, and they are now finalizing the design and documentation, and preparing to manufacture the board in volume.

A JavaScript interpreter for embedded is what I have been waiting for. I don't have hands-on experience with it yet, but it sounds pretty promising. For the Espruino HW board, an STM32F103 Cortex-M3 MCU with 256 KB flash and 48 KB RAM was selected. That's way below the bare minimum requirements of Java ME Embedded, which I discussed in my previous posting.
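To get a feel for the programming model, here is a hedged sketch of Espruino-style JavaScript. On the real board, `LED1` and `digitalWrite` are provided by the firmware; the small mock at the top only stands in for them so the snippet can run off-device too.

```javascript
// Mock of Espruino's built-ins so this also runs off-device;
// on the real board LED1 and digitalWrite come from the firmware.
const pins = {};
const LED1 = "LED1";
function digitalWrite(pin, value) { pins[pin] = value; }

// Toggle the LED state on every call; on the board you would pass
// this to setInterval(blink, 500) to blink twice a second.
let on = false;
function blink() {
  on = !on;
  digitalWrite(LED1, on ? 1 : 0);
}

blink(); // LED on
blink(); // LED off
```

The whole point is that this reads like browser or Node.js JavaScript, yet drives real pins on a 48 KB RAM device.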

Now we have three competing approaches for MCU systems:

  1. Native compiled C/C++, like mbed for example
  2. Java ME Embedded
  3. JavaScript for Things, Espruino or similar
An interesting research question is the comparison of these programming technology approaches from the following perspectives (including, but not limited to):
  • Overall performance and resource consumption
  • Real-time behavior
  • Energy efficiency
  • Connectivity w/Internet and Cloud
  • Reliability and security issues 
  • Quality, including testing solutions, etc. 
  • Productivity of software engineering 
  • Ecosystem support
It's really fascinating to see that there is plenty of activity going on in the embedded and MCU field. It's definitely not a dead zone.

Wednesday, September 25, 2013

Java for Embedded - Does it make sense?

Freescale and Oracle issued a press release stating they will jointly push Java into IoT. This got me wondering: does it make sense to use Java in embedded at all?

I have to admit I have always been suspicious about the usefulness of Java. Historically for performance reasons, and nowadays because there exist other high-level programming languages that provide better productivity, like Python and JavaScript, just to mention a few.

In the past, Java processors executing bytecode natively in hardware were expected to solve capacity and performance constraints in embedded systems. Even though a number of such Java processors exist today, that technology never really entered the mainstream, and nowadays you seldom hear anyone talking about it.

Java ME Embedded is optimized for the ARM architecture. The datasheet says the standard configuration requires 700 KB RAM and 2000 KB ROM, and that it is possible to trim it down to 130 KB RAM and 350 KB ROM. I'm not convinced. In MCU systems it is typically crucially important to have full control over time-critical execution, and energy efficiency is usually an issue.

Both in super-scalar server systems and in resource-constrained embedded systems, asynchronous programming is a more efficient way of using the available resources. Strict object formalism just does not fit that model, and loosening the object paradigm ruins the fundamental idea behind Java.

For MCU systems, I'd choose something like mbed instead. And in the case of CPU systems, perhaps JavaScript and Node.JS could do the job. Actually, that is the architecture I selected for a trade fair and sales demonstrator which is under construction. Our engineers surprised me with the productivity they gained, and they reported being happy with the technology themselves too.

I'll discuss the demo more in upcoming postings.

Edit Oct. 15th:

This seems to be a hot topic in the community as well. First, Dr. Dobb's published an article on Oct. 8th: If Java Is Dying, It Sure Looks Awfully Healthy
http://www.drdobbs.com/jvm/if-java-is-dying-it-sure-looks-awfully-h/240162390

Then they received tons of feedback, and wrote a response on Oct 15th: 1000 Responses to Java Is Not Dying
http://www.drdobbs.com/jvm/1000-responses-to-java-is-not-dying/240162680

Refreshing to read such a discussion. There is one argument that I want to quote: "Java only appears to be dying because the cool kids prefer other languages". That's most probably true. But then back to my original topic. Terrence Barr from Oracle puts it this way:

"There are 9 Million Java developers in the world, compared to 300,000 embedded C developers. On top of that, more and more products are coming from small startups without specialised embedded knowledge. They do know Java though."

Possibly not cool, but there is still quite a lot of mass left.

Tuesday, September 24, 2013

Mosh improvement

The mobile shell mosh is a very convenient substitute for traditional ssh. I have now test-used it on Linux and Android (JuiceSSH) for a couple of weeks. There is one issue I have noticed. If the client process dies while the connection is broken - the most common scenario being that you run out of battery - the server process does not receive any notification and keeps running forever, until manually killed or the system reboots. Even if old mosh-servers are running, new mosh sessions start new server processes on the server side every time.

I now have a number of such mosh-server processes that have been idling for more than a week. Of course I can terminate them manually, but that is not a generic solution. Consider a general-purpose server with hundreds or thousands of users, like a university terminal server. Then you may start running into trouble with all the mosh servers doing nothing but consuming memory.

I have identified two possible solutions. The first is the dirty one: implement a timeout at the server side that terminates the server after a certain idle period. That's against the philosophy of mosh and SSP, but if you make it user-configurable, as a command-line argument for example, then it's possibly not such a big crime.
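The timeout idea could be sketched roughly like this (in JavaScript for illustration only; mosh itself is written in C++, and all names here are made up):

```javascript
// Illustrative only: drop server sessions that have been idle longer
// than a user-configured timeout (e.g. passed as a command-line argument).
function reapIdleSessions(sessions, now, timeoutMs) {
  return sessions.filter((s) => now - s.lastActivity <= timeoutMs);
}

const HOURS = 60 * 60 * 1000;
const sessions = [
  { id: 1, lastActivity: 0 },           // dead client, idle since start
  { id: 2, lastActivity: 95 * HOURS },  // recently active
];

// With a 48-hour timeout, checked at t = 96 hours, only session 2 survives.
const alive = reapIdleSessions(sessions, 96 * HOURS, 48 * HOURS);
```

The real change would of course live in the mosh-server event loop, but the logic is no more complicated than this.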

The second solution is more sophisticated: enable recovery of an existing connection by a new client session. This approach requires the security credentials of the session to be stored locally at the client side. Management of old and new sessions might then cause headaches, and I understand why the mosh development team didn't choose this solution. After all, it was originally just a research project at MIT.

I'd accept the timeout approach. The good old screen program takes care of maintaining my persistent session on the server, so I don't see a problem with starting a new mosh session after a 48- or 96-hour communication break. It would be easy to implement, and would not compromise the security architecture in any way.

Saturday, September 14, 2013

Websocket and server-side scalability

Wireless Sensor Networks and the Internet of Things are examples of application domains where even millions of end-points may be connected to the same server system simultaneously. If each of them keeps a socket connection open all the time, either a plain TCP socket or a Websocket, it yields a server-side scalability problem.

Every now and then one hears concerns about having a high number of concurrent connections to the server. That's the major argument against the use of Websocket that I have heard. Opponents typically suggest a polling-type approach, which ruins the responsiveness of the system. That made me study the topic further.

In general, a system implemented with almost any technology can handle up to about 10k concurrent connections, but beyond that you may run into trouble (the C10k problem). The often-quoted ~64,000 port limit of TCP applies per client IP address, since the server distinguishes connections by the full source/destination address and port tuple; binding several IP addresses to the same computer is a workaround where that limit does bite. There are many service providers who claim to support millions of concurrent websocket connections. However, I don't think relying on the proprietary solution of a single service is a generic enough approach. In any case, advanced technologies like clustering are needed, which makes it more expensive for sure.

Let's take a look at the software architecture. A thread-per-connection approach is insane. Even an object instance per connection can be overkill in terms of memory consumption. Use of asynchronous methods is the only sensible approach for a server implementation. But no matter how efficient your server implementation is, each open TCP socket connection consumes a proportional amount of memory in the underlying operating system.
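A back-of-the-envelope comparison shows why thread-per-connection does not scale. The per-connection figures below are assumptions for illustration, not measurements:

```javascript
// Assumed figures: a default thread stack of 512 KB versus a small
// per-connection state object of 4 KB in an event-driven server.
const connections = 1_000_000;
const threadStackBytes = 512 * 1024;
const eventStateBytes = 4 * 1024;

const GB = 2 ** 30;
const threadModelGB = (connections * threadStackBytes) / GB; // ~488 GB
const eventModelGB = (connections * eventStateBytes) / GB;   // ~3.8 GB
```

Even if the assumed numbers are off by a factor of a few, the two orders of magnitude between the models remain.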

That makes me think of using UDP instead of TCP. There is no need to consume OS memory if all connection-less "connections" are served through a single UDP port. But then we lose the beauty of identifying connections by the IP and port of each end. Some sort of application-layer solution is needed instead.
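A minimal sketch of such an application-layer solution (all names here are illustrative): since the OS keeps no per-connection state for UDP, you key the sessions by the remote address and port yourself.

```javascript
// One UDP port, many logical "connections": sessions are keyed by the
// sender's address:port pair in user space instead of OS socket state.
const sessions = new Map();

function handleDatagram(rinfo, payload) {
  const key = `${rinfo.address}:${rinfo.port}`;
  let session = sessions.get(key);
  if (!session) {
    session = { key, packets: 0 };
    sessions.set(key, session);
  }
  session.packets += 1;
  return session;
}

// In a real server these calls would come from the "message" event of
// Node's dgram socket; here we simulate three incoming datagrams.
handleDatagram({ address: "10.0.0.1", port: 5001 }, "hello");
handleDatagram({ address: "10.0.0.1", port: 5001 }, "again");
handleDatagram({ address: "10.0.0.2", port: 6000 }, "hi");
```

One Map entry per client is far cheaper than one kernel socket per client, which is exactly the point.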

The State Synchronization Protocol (SSP) is one possibility. It's based on UDP and provides many advantages over TCP sockets, like client-side mobility (roaming), persistent connections over flaky and intermittent networks, and good responsiveness. At the time of writing, the only known application that uses SSP is mosh, a replacement for the SSH terminal.

Mosh is available for many Unix/Linux variants and Mac OS X, but the SSP protocol has not yet been published as a general-purpose library. One can, of course, extract the protocol code from the open-source implementation of mosh.

From the embedded systems point of view, like the WSN and IoT cases I mentioned at the beginning, I think an SSP kind of approach could be an even better connection technology than Websocket. But as long as your system does not need to scale up to millions of concurrent connections, Websocket is still a good initial guess.