Saturday, December 28, 2013

Fully automated model train control

Santa visited our place this year as well, and brought a model railway to my son. How on earth did Santa get such a clever idea?

When watching the train running round and round the loop, I started to think about fully automatic control of model trains. The core of the problem is how to measure the exact position and speed of each individual train at any given time.

Little train engineer operating the new Christmas present and my old model train from the 80's.


When investigating the topic further, it looks like all the technology in use dates back to the 80's. Digital Command Control (DCC) is the most common digital control system of the day. DCC uses the tracks to carry the electrical signal, in the form of pulse width modulation of the alternating current supplied to the locomotive. The technology makes it possible to instruct several individual trains running on the same track simultaneously. DCC is one-way only, thus the control station can only instruct the locomotive (speed and direction) but does not get any feedback.
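As a rough illustration of how DCC puts bits on the rails, here is a sketch of the bit timing and baseline packet format, per my reading of the NMRA specs; treat the exact numbers as approximate, and the code as illustrative rather than a conformant encoder:

```python
# Sketch of DCC bit timing and a minimal baseline packet.
# Half-pulse durations in microseconds; real decoders tolerate some deviation.
ONE_HALF_US = 58    # a '1' bit is two ~58 us half-pulses
ZERO_HALF_US = 100  # a '0' bit is two >=100 us half-pulses

def packet_bits(address, instruction):
    """Baseline DCC packet: 14-bit preamble, start bit, address byte,
    start bit, instruction byte, start bit, XOR error byte, end bit."""
    def byte_bits(b):
        return [(b >> i) & 1 for i in range(7, -1, -1)]
    bits = [1] * 14                                  # preamble
    bits += [0] + byte_bits(address)                 # packet start + address
    bits += [0] + byte_bits(instruction)             # data start + instruction
    bits += [0] + byte_bits(address ^ instruction)   # error-detection byte
    bits += [1]                                      # packet end bit
    return bits

def to_half_pulses(bits):
    """Expand bits into the stream of half-pulse durations (us) that the
    booster would drive onto the rails with alternating polarity."""
    out = []
    for b in bits:
        d = ONE_HALF_US if b else ZERO_HALF_US
        out += [d, d]
    return out
```

The asymmetric pulse widths are the whole trick: a decoder can recover the bit stream from the rails while the same waveform also carries the traction power.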

There are some hacks like Selectrix, which brings a return channel, but that only enables identifying which locomotive is present on a certain track section, nothing more. Recently, model train manufacturers have introduced entry-level battery-powered train sets as well. As there is no galvanic contact through the tracks, such sets can use wireless control only. Earlier that was based on infra-red, nowadays radio control of some sort. As the system is totally unaware of where the train is going, it's entirely up to the human operator to control each train; no chance for automation.

Traditionally, control and signalling of trains - real ones - is based on fixed blocks of track, varying from kilometers to tens of kilometers in length. Only one train can occupy a block at a time, and one free block is required between two occupied blocks. That's how it was 100 years ago, and that's how it is today.

Moving block signalling is a newer system, which defines in real time a safe zone ahead of and behind a train, depending on the speed and location of the train. At the moment, the system is in use only in some subways and light rail lines, but no "real" railway uses it, largely due to safety concerns: the technology is not considered mature enough for trains traveling at 200 km/h or more.
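At its core, the safe zone of moving block signalling reduces to a braking-distance envelope. A minimal sketch, with an assumed deceleration and safety margin (real systems also account for gradients, reaction times and position uncertainty):

```python
def safe_zone(speed_mps, decel_mps2=1.0, margin_m=50.0):
    """Length of the protected zone ahead of a train under moving-block
    signalling: worst-case braking distance plus a fixed safety margin.
    The parameter values are illustrative, not from any real standard."""
    braking_m = speed_mps ** 2 / (2 * decel_mps2)
    return braking_m + margin_m

# A train at 55.6 m/s (~200 km/h) with 1 m/s^2 service braking needs
# roughly 55.6**2 / 2 + 50 = ~1596 m of protected track ahead of it.
```

The zone shrinks quadratically with speed, which is exactly why moving block packs slow trains much closer together than fixed blocks can.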

Back to model trains. How to get the exact position of a train? RFID is possibly the most obvious solution. It also provides identification of each train, and of each car if necessary. However, it can recognize the location at certain spots only, and as such is only good for the traditional reserved-block style of signalling. Something more advanced is needed if moving block signalling is required.

Machine vision is one option for locating trains, and it most likely works well if all the tracks run on the same level. However, in the case of multi-layered rail systems with possible tunnels, it is very hard to arrange a configuration that lets a vision system trace every train in real time.

There is a solution. A model train can locate itself very easily by counting the sleepers (crossties) it passes. Fixed calibration points are needed of course; RFID can serve that purpose. How to count the ties? It's rather easy with optical methods if there is enough contrast between the tie and the underlying material, like ballast replica or plain plywood. If the top surface of the tie and the underlying surface are at different heights, a simple angle reflection detector is enough.
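The tie-counting idea is plain dead reckoning, re-anchored at every RFID calibration point. A minimal sketch; the tie spacing constant and class interface are my own assumptions for illustration:

```python
TIE_SPACING_M = 0.06  # assumed nominal tie spacing on the model track

class TrainPosition:
    """Dead-reckoned position from counted ties, re-anchored whenever
    the train passes an RFID calibration point."""
    def __init__(self):
        self.anchor_m = 0.0   # position of the last calibration point
        self.ties = 0         # ties counted since that point

    def on_tie(self):
        """Called by the optical tie detector for each tie passed."""
        self.ties += 1

    def on_rfid(self, tag_position_m):
        """Called when a calibration tag with a known position is read."""
        self.anchor_m = tag_position_m
        self.ties = 0

    @property
    def position_m(self):
        return self.anchor_m + self.ties * TIE_SPACING_M
```

Between calibration points the error grows with miscounted ties, so the RFID tag density sets the worst-case position uncertainty.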

Well, it's not enough that the train itself knows how many ties it has passed since the previous calibration point (RFID). The train must also inform the control system, to let it know the overall situation. RF communication of some sort is needed to provide two-way communication. 2.4 GHz is a good choice: it is a globally available frequency, and its RF characteristics are suitable; the link length is adequate, and the data rate is more than good enough for several trains to communicate simultaneously.

For the purpose, Zigbee, Wifi, Bluetooth and Bluetooth Low Energy are all equally good. Proprietary systems can work as well, but I'd prefer a standardized technology with several component vendors. The choice is only a question of cost and energy consumption. Luckily, if the trains are powered from the tracks, power consumption is not a major issue.

Now, it's all about the control software. All the existing model train control programs, commercial or open source, are based on the traditional signalling system with reserved blocks of rail. Implementing totally new control software with real-time location and speed control requires a group of talented people interested in the same topic, or someone who sees enough business potential in it. I really doubt the latter, and count only on creating an open source community around the idea.

Anyone interested in the idea can contact me.

Thursday, December 19, 2013

Procket on YouTube

Our first YouTube video:

http://www.youtube.com/watch?v=xOhChE4zQ4o

It's about our production test solution called Procket. I'm quite proud of it.

Saturday, December 14, 2013

Boot to Qt, but how to install?

Digia released the latest version of Qt, 5.2, in December 2013. Boot to Qt (Qt Enterprise Embedded) and Qt Mobile sound interesting, so I decided to check what's up, but that's not so easy.

The Qt development environment only supports 64-bit platforms; however, some dependency libraries require a 32-bit environment. First I considered using my Chromebook, but it has a 32-bit ARM CPU, so it's out of the question. Then I decided to run a virtual machine on my 64-bit Windows 7 laptop. I installed VirtualBox and created a virtual machine running Ubuntu. However, for a reason unknown to me, it only accepted the 32-bit version of Ubuntu and refused to install the 64-bit version.

Finally I decided to play it safe and install a native single-boot Linux on my old Intel Core Duo 64-bit laptop. At the time I didn't have a proper Linux PC available, so I had to use a Windows application to create a bootable USB memory stick. The Ubuntu web page links to Universal USB Installer. The UUI tool has a selection of multiple distributions. However, it does not ask how many bits you have or wish to use.

By default, UUI installs the 32-bit version of Ubuntu onto the memory stick. I didn't notice that until I had the OS up and running on my laptop and tried to install Qt. So, once more re-installing Ubuntu in order to get the proper 64-bit variant. As the Qt installation instructions say Ubuntu 12.04 LTS and later are supported, I selected the latest 13.10 version of desktop Ubuntu.

Here comes the tricky part, as I said at the beginning. Even though the Qt development environment supports only 64-bit host platforms, it requires some libraries that work only in a 32-bit environment. For that, Linux has a library package called ia32-libs, which is one of the prerequisites of the Qt installer. However, that package was removed from the 13.10 release!

Here is a posting explaining the reason: "The ia32-libs package was a hack to get 32-bit packages installed on a 64-bit installation. Since Ubuntu version 11.10 (Oneiric), Multi Arch has been added. One of the objectives for it is removing the ia32-libs package. Instead, you have to install the 32-bit libraries of a package with: sudo apt-get install package-name:i386 "

So there is a way to install 32-bit libraries on 64-bit Linux, but the Qt installer just doesn't know how to do it! The installation crashes an hour or so after starting. So, I ended up re-installing Linux on my PC once again, this time 64-bit Ubuntu 12.04 LTS, which appears to be the one and only supported platform. Let's hope for better luck this time.

What is the lesson here? Claiming the PC world is 64-bit only is as reasonable as claiming the Internet is IPv6 only. Even if 64-bit and IPv6 are good goals, one cannot deny that the legacy 32-bit and IPv4 world still strongly exists. If the Qt development environment supported 32-bit platforms in the first place, I wouldn't have had this hassle at all.

Edit Dec. 15:
Before re-installing Ubuntu, I tried to re-install the Qt environment. This time the installation succeeded, so it looks like the first try was just a random failure. The Qt Creator is awfully slow, though. I don't know why yet.

Thursday, December 12, 2013

Wifi vs. Bluetooth in HMI connectivity

When designing a user interface for a building automation appliance, a question often arises: should I provide a local UI at all, as the majority of my target customers already have a tablet or smartphone of some sort which can be used for the purpose?

If you decide that a non-local mobile UI will be provided, either as the only UI or as a supplement, a number of new questions arise. If I select the app approach, which platforms should I support, and how many apps for different platforms can I maintain? And the topic of this posting: should I select Wifi or Bluetooth as the connectivity technology between the appliance and the UI terminal?

Traditionally I have favored Wifi, but recently I have come to think that Bluetooth may be better in some cases, especially from a usability point of view. A personal example:

In my car, I have an OBD2 Bluetooth dongle, and on my smartphone I have an OBD application installed. Whenever I'm sitting in my car, it is very convenient to just take the phone from my pocket and launch the app simply by tapping the icon, and then I have instant access to the measurements and diagnostics of the vehicle. Very easy to do. After the initial Bluetooth pairing I do not need to do any other configuration when using the function.

Recently, I purchased an Android/iPad-controlled Wifi RC car with live video streaming. At the local retailer it cost only 30€. The toy has an embedded Wifi access point. Yes, Wifi has enough bandwidth for a live video stream. But on the other hand, the usability is not that good.

Before launching the car's app, I first have to manually change the Wifi network in order to get a connection to the car. That's doable, but more inconvenient is that whenever the toy car is powered up, my phone tends to spontaneously connect to its access point instead of my home Wifi. So, when I'm then supposed to do some internet surfing, I find myself connected only to the car. And once again changing the network manually.
Wifi Camera Buggy from BeeWi
If I had direct wireless access to my air conditioning, ventilation, heating, etc. devices, I'd definitely prefer simply launching an app, instead of struggling with the network configuration first. Of course, there are issues like the location of the connection point, regarding signal strength and link distance, etc., which must be taken into account when making design choices.

Why not use the Wifi of the appliance in device mode and connect to the existing Wifi infrastructure of the home? Well, that's one possible way of thinking. However, configuring a wireless network when you have wireless connectivity only sounds like shooting yourself in the foot. I can only imagine how the vendor of such a product would sink into a flood of customer support requests: "I tried to configure the network, and now I have lost the connection to my device altogether. What am I supposed to do now?"

I claim that from a usability point of view, Bluetooth can be a better option than Wifi. The downside is that in the generic case, a web UI cannot be provided over Bluetooth; a specific app is needed instead.


Saturday, December 7, 2013

Industrial Internet for Test systems

Last week at Nordic Test Forum in Tallinn, Espotel and Virinco announced co-operation in providing remote management for production test systems. Press release (pdf).

Virinco has spent 10 years developing WATS, a world-class tool for test result collection, data analysis and presentation, and remote maintenance of test stations. WATS is available as the cloud service SkyWATS and as an on-site installed service. The greatest added value of WATS is that the brand owner sees what's going on in production, in reality and in real time.

Espotel provides state-of-the-art production test systems under the Procket brand. Together with WATS remote management, Espotel can provide turn-key solutions for production testing. Together, we make several promises:
  • Your product: Tested in production
  • Your data: Analyzed in real-time
  • Your tester: Maintained

Why should the product owner own the test system in the first place? Especially if production is outsourced to an Electronics Manufacturing Service (EMS).

First of all, owning the tester gives flexibility and independence from the EMS. Production is easy to transfer from one facility to another, and to replicate across several facilities administered by different EMS companies. And most importantly, the quality of production is guaranteed, no matter where it happens.

An owned test system also provides real-time visibility into production, and the possibility to improve the product design if necessary. Why does that matter if we pay the EMS only for functional units delivered? Well, the customer pays for the discarded products as well, one way or another. The better the yield, the lower the margin the EMS needs to maintain a profitable business.

Sunday, November 10, 2013

Embedded Conference Scandinavia

The ECS'13 was held last week, for the 8th time, in Stockholm. Actually, this year it moved to Kista, a northern suburb, from Älvsjö, a southern suburb of Stockholm. Kista is billed as the Science City or Silicon Valley of Sweden, with 70,000 technology jobs in a compact area.

The move was definitely a good thing. The total number of visitors increased by 50% from the previous year. I assume that's mostly thanks to the move to Kista, where people from the surrounding companies can easily pop in for just a couple of hours, instead of spending a whole day traveling back and forth to Älvsjö.

The Embedded Conference is becoming the leading embedded event in Northern Europe. The trade fair is very well focused: all the technology on show is relevant, and all visitors are more or less potential customers. The exhibition hall was rather crowded from 10 am until 4 pm (opening hours 9-5), and according to the organizers, many conference talks were fully booked.

I gave my presentation at the demo square, in the middle of the exhibition hall. I covered many of the same subjects I have discussed here in my blog earlier. I had a nice amount of audience, but more importantly we had plenty of visitors at our booth. Overall, the event was very successful and we will definitely join next year as well.

As for the technology, it looks like Sweden is moving in the same direction things have already gone in Finland for a decade: Linux and ARM are the embedded mainstream, replacing Windows and Intel. Traditionally, Swedish consulting companies have provided on-site consultants rather than turn-key product development services on the supplier's own premises. Now, according to our competitors, customers in Sweden too are becoming more interested in off-site consulting services (product development).

Tuesday, October 29, 2013

Linux technology platforms

In the previous posting I mentioned 3rd and 4th generation Linux technology platforms, so I decided to open up the concept a little.

Jhumar, the 4th generation Linux technology platform.


In custom product development, it does not make sense to start everything from scratch every time. Use of a reference design (technology platform) not only reduces development costs, but essentially minimizes the project schedule and technology risks from the customer's point of view. A verified CPU design with known characteristics, combined with the flexibility of fully custom electronics design, is the way selected by Espotel.

The 3rd generation Linux platform was called Jive, and is based on Atmel's AT91SAM926x CPU with the ARM926 architecture running at 240 MHz. The platform is no longer in production use (no new designs), so no more words about it.

The 4th generation Linux platform is called Jhumar; it has a Freescale i.MX287 CPU with an ARM926 core, running at 450 MHz. 256 MB of RAM and 1 GB of Flash, expandable with a micro-SD card, provide sufficient memory reserves. The i.MX28 CPU is included in Freescale's product longevity program with 15 years of support, dedicated to medical and automotive applications. The product was released in 2010, so it has guaranteed availability until the year 2025.

The base design supports 4.3" and 7" displays with WVGA resolution (limited by the integrated graphics controller of the CPU). What differentiates Jhumar from COTS tablets is the selection of wired and wireless interfaces: Ethernet, CAN, USB host+device, serial, WiFi, Bluetooth, Bluetooth LE, Zigbee and 868 MHz RF. A subset of the interfaces is always selected based on the requirements of the specific application.

The 5th generation Linux platform is based on the Cortex-A8, and the 6th generation has a Cortex-A9 multi-core CPU.

Sunday, October 27, 2013

Long live the ARM9


The ARM9 architecture is not the high end of embedded, but it's still feasible for many applications. The latest Lego Mindstorms, which I wrote about in my previous posting, is a good example. ARM9 processors are inexpensive, low power and widely available from many vendors. The capacity and performance are good enough to run a smooth graphical UI and to provide state-of-the-art remote connectivity.

I have collected here some examples of products currently in production with an ARM926 core similar to the Mindstorms EV3 Brick. How do I know? Well, they were all designed in my company for our customers. All the mentioned projects have benefited from our 3rd and 4th generation Linux technology platforms, which were used as reference designs for the custom electronics.

Consumer electronics: Central unit of a building automation system.
Industrial automation: Quality control tool of a welding machine.
Medical device: Remote controller for a hospital operating room.
Vehicle automation: Operation panel of an integrated wheel loader scale.
Industrial automation: High accuracy field calibrator.
Medical diagnostics: CRP analyzer.
Industrial automation: Configuration tool and data logger for a process industry flow measurement sensor.




Friday, October 25, 2013

ARM + Linux + LabView = Mindstorms

Lego released the EV3, the third generation of Mindstorms programmable robotic kits, in September 2013; it is referred to as the most hackable Lego ever. The core technologies of the concept include an ARM CPU, the Linux operating system and a LabView-based programming environment. What a cool toy!

My first Mindstorms robot design.
The Programmable Brick, the central unit of the EV3, is actually a pretty advanced embedded system. The heart of the Brick is Texas Instruments' Sitara AM1808 SoC with an ARM9 core, running at 300 MHz. The device has 64 MB of RAM and 16 MB of integrated Flash, expandable up to 32 GB with an SD memory card. Connectivity is provided via a USB device interface for programming, Bluetooth for mobile apps (Android, iOS), and a USB host port for Wifi dongles.

The programming environment of Mindstorms is based on LabView, with a nice graphical frontend. Programming is intuitive and easy. There are some program flow elements like loop and switch, but many advanced techniques are missing, including synchronization of parallel execution flows, event handling, and message passing. Well, after all it's a toy intended for youngsters.

The EV3 runs a stripped-down version of the Ångström distribution. The source code of the Linux is of course published as open source. There are instructions on GitHub on how to build the firmware of the Brick yourself.

The previous generation NXT version of Mindstorms is well supported by many programming languages; a list of languages is available on Wikipedia. As the EV3 was released just recently, there are only a few language ports available yet. One of the few is Java, with the help of the leJOS project (Java for Lego Mindstorms). The C#-based MonoBrick also provides a communication interface to a PC. Support in official LabView is expected during the first half of 2014.

I just purchased an EV3 and constructed my first Mindstorms robot. My intention is to include the EV3 robot as part of my web technology demonstration at Embedded Conference Scandinavia, November 5-6 2013, in Stockholm. The robot will wave a couple of Freescale Freedom evaluation boards, in order to generate non-static accelerometer data to be transmitted to the cloud and further to a mobile user interface. (In the picture above there are Arduino boards, as I didn't have FRDMs at hand at the time.)

Why Mindstorms? Well, in fact the EV3 utilizes all the same core technologies which are de facto key technology choices of my company: ARM, Linux and LabView. Well, LabView is not used for application programming on top of Linux, but in the implementation of production test systems.

Thursday, October 24, 2013

QR vs. RFID

Why use a QR code when RFID can do all the same and much more? That's true, but it does not mean the QR code is obsolete. There are a couple of arguments for QR.

First of all, the number of devices capable of reading QR codes exceeds many-fold the number of devices capable of reading RFID or doing NFC. Potentially every smartphone, tablet, laptop and any device with a camera can read QR codes. Only a very limited number of mobiles or specific devices can do RFID/NFC communication.

The QR code is also ultimately cheap. Even if RFID tags are inexpensive nowadays, I haven't seen any newspaper with RFIDs embedded in its pages. But I have seen many magazines with QR codes printed on their pages. Stickers with printed QR codes are and will be cheaper than stickers with an embedded RFID chip.

The QR code is static and one-way. Thus there is and will be a rationale for both technologies to exist side by side.

Wednesday, October 16, 2013

Uses of QR code

Everyone knows the QR code, most often perhaps in the context of advertisements: a link that leads to a company web site for more information. The QR code can do much more than that. In addition to a URL, the code can store other information like a visiting card, an SMS or free text. There are a couple of handy use cases in the system design of embedded stuff.

URL of the blog. Try it with your smart phone.
Deployment
Let's assume we need to install a number of sensors or actuators, wired or wireless, in a building or an outdoor environment. We don't want to pair the nodes in advance, as then we would have to pay close attention to which unit is installed where. All units are equal as long as they are in the box. Only at installation time do they get their association to a place and a possible role. How to do that in the most convenient manner?

There may be a serial number printed on every unit. Then the installer must manually copy the serial number to paper or an electronic device: a manual and error-prone process. If we put a 1D bar code on each unit, the identification process can be automated, but a specific tool with a laser bar code reader is needed.

Let's stick a QR code with a unique identifier onto each unit. The installer can then use his or her smartphone or tablet to read the code, immediately associate the specific sensor with the installation location, and deliver the info to the system database with the help of the device's mobile data connection. In an outdoor environment, GPS data can be combined to create a geospatial mapping of the installation.
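The record pushed to the system database could be as simple as the sketch below. All field names here are hypothetical, made up purely to illustrate the association of the QR identifier, the location and optional GPS data:

```python
import json
import time

def installation_record(qr_payload, location, gps=None):
    """Build the record a technician's phone would send to the system
    database after scanning a unit's QR code. The schema is illustrative,
    not from any real deployment system."""
    record = {
        "unit_id": qr_payload.strip(),    # unique id read from the QR code
        "location": location,             # where the unit was installed
        "installed_at": int(time.time()), # timestamp of the scan
    }
    if gps is not None:                   # outdoor installs: add coordinates
        record["lat"], record["lon"] = gps
    return json.dumps(record)
```

With GPS included, the same records directly yield the geospatial map of an outdoor installation.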

Authorization
Let's consider building automation, for example some sort of equipment installed inside a building, like a heat pump, ventilation unit or A/C. Those are rather sophisticated devices nowadays, capable of offering remote connectivity, possibly with the help of a cloud solution. Use of such a remote user interface should be as easy as possible, while simultaneously providing adequate security and confidentiality.

If a person has physical access next to the device, then we may assume he or she has the right to access and control that equipment, at least to some extent; not a complete outsider. If there is a QR code printed on the side of the device, that information may tell the cloud, or even the device itself, that the person operating the mobile terminal which delivered the code has access rights to the device.

Encryption
In the case of asymmetric cryptography, the QR code may contain the public key of the server side. If it is printed and mounted on the assembly line, it most probably is authentic and not altered. With the help of the public key, the terminal unit may then send its own public key to the server in encrypted format, after which the two can communicate in a completely secured fashion, with no cryptographic credentials ever exchanged in plain text.
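To make the flow concrete, here is a toy illustration using textbook RSA with deliberately tiny numbers (never usable for real security; real systems use proper key sizes and padding). The point is only that the single thing the QR code carries is the server's public key, and nothing secret ever travels in the clear:

```python
# Toy "textbook RSA" with the classic tiny primes p=61, q=53.
SERVER_N, SERVER_E = 3233, 17   # public key: this is what the QR code holds
SERVER_D = 2753                  # private key: never leaves the server

def qr_encrypt(m, n=SERVER_N, e=SERVER_E):
    """Terminal side: encrypt a value using only the public key
    read from the QR code sticker."""
    return pow(m, e, n)

def server_decrypt(c, n=SERVER_N, d=SERVER_D):
    """Server side: recover the value with the private key."""
    return pow(c, d, n)

# The terminal can now send e.g. its own key material encrypted;
# an eavesdropper on the link sees only the ciphertext.
```

After this exchange both ends hold each other's keys, and the actual session can run fully encrypted.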

Sunday, October 6, 2013

What makes Jolla so special?

Jolla rises from the ashes of Nokia's MeeGo, with its second pre-order round ongoing. Jolla differentiates itself from other mobile brands in many ways.

First of all, Sailfish, the operating system based on Mer, is 100% open source. No proprietary binary code included. If you doubt what the device does, you can always check it from the original source: the source code. That makes it difficult to hide any call-back-home or other nasty features among the functionality of the software.

Secondly, as Jolla states, they have no operations or servers physically located in the US, and as such they are not obligated to disclose any user information to the NSA. In the past, technical intelligence programs like Echelon were dedicated to listening to trunk networks. As all traffic is nowadays more or less encrypted, it's easier to access the data at the end-point of communication, where it is available in decrypted format, and that's where PRISM hits.

Third, Sailfish will provide superior compatibility with applications from different platforms, with little or no modification, including MeeGo, Android, Unix, Linux and HTML5. That's not only because the Jolla sailors think they can do it, but especially to grow the ecosystem fast. Jolla is targeting 500K+ apps at the beginning.

There are many appealing aspects to the Sailfish concept, like support for many programming technologies, including Qt, HTML5, Android and many more, and the unique software development and emulation environment, which makes it possible to develop and test Sailfish applications before the actual hardware exists, and on any host OS.

From history we know that technical superiority by itself does not always define the winning solution. The concept needs much more: a believable and credible story, the ecosystem, and of course, paying customers. Jolla still has a long way to go, but according to some analysts, Sailfish has a good chance of beating the market share of Windows Phone. The growth is expected to happen especially in China and the rest of Asia, where Jolla provides a non-US alternative in the mobile market.

Friday, September 27, 2013

JavaScript for Embedded - Does it make sense?

Espruino - or JavaScript for Things, as they call it - just got its Kickstarter funding collected. It's a JavaScript interpreter for MCUs, to be released as open source SW/HW. The first demonstrator already exists, and now they are finalizing the design and documentation, and preparing to manufacture the board in volume.

A JavaScript interpreter for embedded is what I have been waiting for. I don't have hands-on experience with it yet, but it sounds pretty promising. For the Espruino HW board, an STM32F103 Cortex-M3 MCU with 256k Flash and 48k RAM was selected. That's way below the bare minimum requirements of Java ME Embedded, which I discussed in my previous posting.

Now we have three competing approaches for MCU systems:

  1. Native compiled C/C++, like mbed for example
  2. Java ME Embedded
  3. JavaScript for Things, Espruino or similar
An interesting research question is a comparison of the different programming technology approaches from the following perspectives (including, but not limited to):
  • Overall performance and resource consumption
  • Real-time behavior
  • Energy efficiency
  • Connectivity w/Internet and Cloud
  • Reliability and security issues 
  • Quality, including testing solutions, etc.
  • Productivity of software engineering 
  • Ecosystem support
It's really fascinating to see that there is plenty of activity going on in the embedded and MCU field. It's definitely not a dead zone.

Wednesday, September 25, 2013

Java for Embedded - Does it make sense?

Freescale and Oracle made a press release saying they will jointly push Java into IoT. This made me wonder: does it make sense to use Java in embedded at all?

I have to admit I have always been suspicious about the usefulness of Java. Historically due to performance reasons, and nowadays due to the fact that there exist other high-level programming languages that provide better productivity, like Python and JavaScript, just to mention a few.

In the past, Java processors executing bytecode natively in hardware were expected to solve the capacity and performance constraints of embedded systems. Even if a number of such Java processors exist today, that technology never really entered the mainstream, and nowadays you seldom hear anyone talking about it.

Java ME Embedded is optimized for the ARM architecture. The datasheet says the standard configuration requires 700 KB of RAM and 2000 KB of ROM, and it is possible to strip it down to 130 KB of RAM and 350 KB of ROM. I'm not convinced. In MCU systems it is typically crucial to have full control over time-critical execution, and energy efficiency is usually an issue.

Both in super-scalar server systems and in resource-constrained embedded systems, asynchronous programming is a more efficient way of using the available resources. Strict object formalism just does not make sense there, and loosening the object paradigm ruins the fundamental idea behind Java.

For MCU systems, I'd choose something like mbed instead. And in the case of CPU systems, perhaps JavaScript and Node.JS could do the job. Actually, that is the architecture I selected for a trade fair and sales demonstrator which is under construction. Our engineers surprised me with the productivity they gained, and they reported being happy with the technology themselves too.

I'll discuss the demo more in coming postings.

Edit Oct. 15th:

This seems to be a hot topic in the community as well. First Dr.Dobbs published an article on Oct. 8th: If Java Is Dying, It Sure Looks Awfully Healthy
http://www.drdobbs.com/jvm/if-java-is-dying-it-sure-looks-awfully-h/240162390

Then they received tons of feedback, and wrote a response on Oct 15th: 1000 Responses to Java Is Not Dying
http://www.drdobbs.com/jvm/1000-responses-to-java-is-not-dying/240162680

Refreshing to read such a discussion. There is one argument that I want to quote: "Java only appears to be dying because the cool kids prefer other languages". That's most probably true. But back to my original topic. Terrance Barr from Oracle puts it this way:

"There are 9 Million Java developers in the world, compared to 300,000 embedded C developers. On top of that, more and more products are coming from small startups without specialised embedded knowledge. They do know Java though."

Possibly not cool, but there is quite a lot of mass left still.

Tuesday, September 24, 2013

Mosh improvement

The mobile shell mosh is a very convenient substitute for traditional ssh. I have now test-used it on Linux and Android (JuiceSSH) for a couple of weeks. There is one issue that I have noticed: if the client process dies while the connection is broken (the most common scenario being that you run out of battery), the server process does not receive any notification and remains running forever, or until it is manually killed or the system reboots. Even if old mosh-servers are running, new mosh sessions start new server processes on the server side every time.

I now have a number of such mosh-server processes that have been idling for more than a week. Of course I can terminate them manually, but that is not a generic solution. Consider a general-purpose server with hundreds or thousands of users, like a university terminal server. Then you may start getting into trouble with all the mosh servers doing nothing but consuming memory.

I have identified two possible solutions. The first is the dirty one: implement a timeout at the server side that terminates the server after a certain idle period. That's against the philosophy of mosh and SSP, but if you make it user-configurable, as a command line argument for example, then it's possibly not such a big crime.
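To make the idea concrete, the mechanism could look roughly like this (mosh itself is written in C++; this Node.JS fragment is only my illustration of the concept, and makeIdleTimer is a name of my own):

```javascript
// Conceptual sketch of a server-side idle timeout (not actual mosh code).
// The timeout length would come from a user-supplied command line argument.
function makeIdleTimer(timeoutMs, onExpire) {
  var timer = null;
  return function reset() {
    if (timer) clearTimeout(timer);
    timer = setTimeout(onExpire, timeoutMs);
    if (timer.unref) timer.unref(); // don't keep the process alive just for this
  };
}

// Usage: call reset() whenever client traffic arrives. If the client
// disappears for good, the callback eventually terminates the server.
var resetIdle = makeIdleTimer(48 * 60 * 60 * 1000, function () {
  console.log('No client traffic for 48 hours, exiting');
  process.exit(0);
});
resetIdle();
```

Each incoming datagram would simply restart the countdown, so an active session is never affected.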

The second solution is more sophisticated: enable recovery of an existing connection by a new client session. This approach requires the security credentials of the session to be stored locally at the client side. Management of old and new sessions might then cause headaches, and I understand why the mosh development team didn't choose this solution. After all, it was originally just a research project at MIT.

I'd accept the timeout approach. The good old screen program takes care of maintaining my persistent session at the server, so I don't see a problem in starting a new mosh session after a 48 or 96 hour communication break. It would be easy to implement, and would not compromise the security architecture in any way.

Saturday, September 14, 2013

Websocket and server-side scalability

Wireless Sensor Networks and the Internet of Things are examples of application domains where even millions of end-points may be connected to the same server system simultaneously. If each of them keeps a socket connection open all the time, either a plain TCP socket or a Websocket, it yields a server-side scalability problem.

Every now and then one may hear concerns about a high number of concurrent connections to the server. That's the major argument against the use of Websocket that I have heard. Opponents typically suggest a polling-type approach, which ruins the responsiveness of the system. That made me study the topic further.

In general, any system implemented with any technology can easily handle up to 10k concurrent connections, but beyond that you may get into trouble (the C10k problem). TCP supports ~64,000 ports per IP address, which limits the number of connections between a single pair of addresses; since each connection is identified by the full source/destination address and port 4-tuple, a server can accept far more connections on one listening port, and binding several IP addresses to the same computer is one workaround where the port space does run out. There are many service providers who claim to support millions of concurrent websocket connections. However, I don't think relying on the proprietary solution of a single service is a generic enough approach. In any case, advanced technologies like clustering are needed, which makes it more expensive for sure.

Let's take a look at the software architecture. The thread-per-connection approach is insane. Even an object instance per connection can be overkill in terms of memory consumption. Use of asynchronous methods is the only sensible approach for a server implementation. But no matter how efficient your server implementation is, each open TCP socket connection consumes a proportional amount of memory in the underlying operating system.

That makes me think of using UDP instead of TCP. It is unnecessary to consume OS memory if all connection-less "connections" are served through a single UDP port. But then we lose the beauty of identifying connections by the IP and port of each end. Some sort of application-layer solution is needed instead.

The State Synchronization Protocol (SSP) is one possibility. It's based on UDP, and provides many advantages over TCP sockets, like client-side mobility (roaming), persistent connections over flaky and temporary networks, and good responsiveness. At the moment of writing, the only application known to use SSP is mosh, a replacement for the SSH terminal.

Mosh is available for many Unix/Linux variants and Mac OS X, but the SSP protocol has not yet been published as a general-purpose library. One can, of course, extract the protocol code from the open source implementation of mosh.

From the embedded systems point of view, like the WSN and IoT cases I mentioned at the beginning, I think an SSP kind of approach could be an even better connection technology than Websocket. But as long as your system does not need to scale up to millions of concurrent connections, Websocket is still a good initial guess.





Tuesday, August 27, 2013

Internet of Things with help of ARM

Today, ARM Ltd announced they have acquired Sensinode Oy, a Finnish company providing software technology for the Internet of Things (IoT). In the press release, ARM says it will make the technology available to developers through the ARM mbed project, to enable easy creation of IoT applications.

Sensinode is best known for Nanostack, an implementation of the 6LoWPAN standard. They are also known for their active contribution to standardization in the IoT arena. At Espotel, I have been involved as project manager in a customer product development project where Nanostack was applied in a wireless sensor network for an industrial automation application.

The mbed development platform is intended for fast creation of products based on ARM microcontrollers. It consists of software and hardware development tools (SDK, HDK), an online IDE, and community support. At the moment, there are 11 COTS MCU boards readily available from different vendors, ranging from Cortex-M0 to M4 cores.

The mbed Compiler is a C/C++ IDE provided as a web app, so most operating systems and browsers are supported as the host environment. Unlike with the BeagleBone's Cloud9, your mbed projects are stored in the cloud, not on the development platform itself. The IDE provides version control by default, but also raises some security concerns. For hobbyists and experimentation that's maybe not a problem, if the code is intended to be published anyway.

Try it yourself

 

I did a quick exercise with mbed and a Freescale Freedom board KL25Z with a Cortex-M0+ core. In less time than it takes to write this posting, I got my first code up and running on the target. There are just a few steps to follow, according to the instructions.
  1. Connect the board via USB cable while pressing the reset button. The device gets mounted as a USB mass storage in bootloader update mode.
  2. Update the bootloader by drag-and-dropping the image, then reset the device
  3. Open the IDE by clicking the mbed icon on the device flash
  4. Write your code, compile, and save the binary to the device flash 
  5. Reset the device, and you're running your new code!
Steps 1 and 2 need to be done only once, to enable mbed on your target.

FRDM-KL25Z blinking blue and green LED.
After my experiment, I'm really fascinated by the mbed concept. Together with the proven wireless internet connectivity provided by Sensinode, I believe what ARM promises: the IoT is one step closer.

Monday, August 26, 2013

Embedded Conference Scandinavia 2013

Embedded Conference Scandinavia in Stockholm is the leading event in the Nordic countries focusing on embedded technology. The conference has grown year by year, and this is the 8th time it is being organized. This year the event has moved to Kista from Stockholmsmässan, where it was organized earlier. For the first time, the conference is organized jointly with M2M Summit Scandinavia, a branch of the M2M Summit in Germany organized by the M2M Alliance.

My presentation proposal to the conference was accepted, and I got the time slot 13:30-14:00 at the central demo square on Wednesday, the 6th of November. The topic is "Embedded connectivity with HTML5". This is the 4th time I'm giving a talk at ECS. Last time I discussed implementing embedded user interfaces with the help of HTML5.

Espotel will have a booth at the conference, and you're welcome to meet us at any time during the two-day event.

ECS'13, 5th-6th of November, Kista Stockholm

Friday, August 16, 2013

SiLabs + Energy Micro

Today I met SiLabs people, who presented their new MCU portfolio. As you may know, Silicon Labs acquired Energy Micro; the deal was closed only 6 weeks ago. Energy Micro was founded by the Norwegian Geir Førre, who also founded Chipcon, which was sold to Texas Instruments. What a success for a serial entrepreneur! I once had a chance to listen to Geir in Stockholm, and I have to admit that he is a wise guy.

I was more or less aware of what EM has done with the silicon, but I didn't know their development tools offering. Simplicity Studio provides quite nice tools for energy profiling and optimization. Competitors are narrowing the gap in MCU energy consumption figures, but to my understanding they don't provide a similar level of development software support for energy optimization. So, from my perspective, that's the #1 competitive advantage that SiLabs has in EFM over competitors.

SiLabs promises to launch an EFR family of integrated MCU+RF parts next year. That will contain an EFM core plus a radio of some sort. Only preliminary numbers for the RF characteristics are available yet. However, SiLabs has a good reputation in discrete radios, at least in sub-GHz RF chips, which are extensively used by my company in customer designs. Thus I expect something competitive to reach the market.

Wednesday, August 14, 2013

WebSocket experimentation with BeagleBone



I really like the concept of the BeagleBone and the Cloud9 IDE. I have my board connected directly to my local intranet with an Ethernet cable, so I can access the IDE and my saved project, and continue working on any PC in my household, no matter whether it's the ChromeBook, my company Windows Ultrabook, or my wife's MacBook. The very same user experience, with zero installation needed. And my project is secure, as it is stored locally on the mass memory of the board itself. No one can access it from outside my intranet. Of course, for professional software development, version control, collaboration, backups, etc. need to be considered separately.

The default Ångström Linux image installed on the BeagleBone does not have the WebSocket JavaScript library socket.io installed. The GitHub BoneScript Socket.IO page has instructions on how to install socket.io. The whole project was initially committed only two weeks ago. There is also a nice, but rather complex, example of using WebSocket to remotely control LEDs, which need to be externally attached to the BeagleBone board. NB! Some hardware hacking required.

I wrote a canonical code example demonstrating how to control an onboard LED on the BeagleBone, so no hardware hacking is needed to test the code. JavaScript uses JSON (JavaScript Object Notation) to exchange data. In this example, very simple JSON messages are delivered over the WebSocket connection to control and confirm the LED state. BoneScript functions are used for hardware access.

JSON messages in this example have the following simple syntax:

{"name":"led","args":["on"]}

The demo consists of two files located in the sockserv folder: socketserver.js, which is the Node.JS code executed on the BeagleBone, and socketclient.html, which is the web page delivered to the web browser upon request, containing the HTML and JavaScript code for communication with the Beagle. The architecture corresponds to scenario #1, presented in my previous posting.

Let’s take a closer look at a few key functions.

Server side
This is how the "web server" is implemented. Whenever a client connects and sends any GET request, the static socketclient.html file is read from the local flash disk on the Beagle and sent to the browser.

function httpserver(req, res) {
  fs.readFile('sockserv/socketclient.html',
    function (err, data) {
      if (err) {
        res.writeHead(500);
        return res.end('Error loading socketclient.html');
      }
      res.writeHead(200);
      res.end(data);
    });
}
When a WebSocket connection is established and some data is received, the following callback parses the message to determine whether the LED should be switched ON or OFF. As an acknowledgement, the same message is transmitted back to the client.

io.sockets.on('connection', function (socket) {
  socket.on('led', function (data) {
    console.log(data);
    if (data == 'on'){
        b.digitalWrite(led, b.HIGH);
        socket.emit('led', 'on');
    }else{
        b.digitalWrite(led, b.LOW);
        socket.emit('led', 'off');
    }
  });
});

Client side
On the socketclient.html page there is one button to toggle the state of the LED. When the button is clicked, this function transmits a JSON message over the WebSocket to the server.
     
function toggleLed(){
    if ( document.getElementById("ledButton").value == "Switch On" ) {
        socket.emit('led', 'on');
    } else {
        socket.emit('led', 'off');
    }      
}
If an acknowledgement is received, it is processed by this callback function, which changes the state of the button, confirming successful operation to the user. If you see the label of the button change, you know the message has travelled back and forth.


socket.on('led', function (data) {
    console.log(data);
    if (data == 'on'){
        document.getElementById("ledButton").value= "Switch Off";
    } else {
        document.getElementById("ledButton").value= "Switch On";
    }
});
There is one flaw in the beauty of this code: once the page is loaded, the button is not initially synchronized with the actual state of the physical LED. Only after clicking the button for the first time are the UI and the LED in sync. I made this decision on purpose, as I want to keep the example as simple as possible.
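For the record, the flaw could be fixed by keeping the last commanded state at the server and pushing it to each newly connected client. A sketch of the missing logic (my own illustration; makeLedController and writePin are hypothetical names, with writePin standing in for BoneScript's b.digitalWrite):

```javascript
// Sketch of the missing synchronization, with the hardware write
// injected so the logic is self-contained. 'writePin' stands in for
// BoneScript's b.digitalWrite; 'makeLedController' is a name of my own.
function makeLedController(writePin) {
  var ledState = 'off';                 // last commanded state
  return {
    // call from io.sockets.on('connection', ...) for each new socket
    onConnect: function (socket) {
      socket.emit('led', ledState);     // initial sync on page load
    },
    // call from socket.on('led', ...) instead of the inline handler
    onCommand: function (socket, data) {
      ledState = (data == 'on') ? 'on' : 'off';
      writePin(ledState == 'on' ? 1 : 0);
      socket.emit('led', ledState);     // acknowledgement, as before
    }
  };
}
```

The client would then also need to handle the unsolicited initial 'led' message, which the existing callback already does.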

 
Web UI of the demo.


Disclaimer: I’m not a professional JavaScript programmer; actually, this was the first Node.JS code I have ever written, and I had tried JavaScript only a few times before. Thus the code may not be optimal, and I may have understood something wrong. I warmly welcome any feedback to correct faults and to make the example pedagogically more correct. Well, it looks like it's working OK.