Thursday, November 17, 2011

SVP - Service View Presenter

During my first encounters with the Model View Controller and later with the Model View Presenter pattern, I had a hard time understanding what's behind the Model. Having practiced test-driven development for years, I will now summarize my experience and describe a new pattern called Service View Presenter, or SVP.

The MVC pattern emerged long ago in an attempt to make user interface code modular and better structured. The MVP pattern then came as a refinement, in an attempt to make MVC more testable. In its very essence, MVP does not allow the View to interact with the Model directly. Hence the view becomes a purely declarative piece of code (i.e. no if/else/switch statements and only foreach loops). All logic thus resides in the Presenter, and the view is mocked out during unit testing. Let me now go into some more detail about the roles of each of these elements.

The View
In MVP, the view is concerned with
  • Declaring how the UI should look
  • Capturing user interaction
  • Modifying the view on request from the Presenter

The Presenter

The presenter then has the responsibilities of

  • Handling user interactions
  • Manipulating the view in response
  • Interacting with other objects to perform calculations or
    data access

The Model
Having asked several people the question "What is the model?" I usually receive different
responses, like:

  • This is a pure data structure/holder class (e.g. a DTO)
  • It's a data holder with embedded logic, e.g. for consistency checks, serialization etc. (which must have their own tests)

These sound reasonable, but IMO they miss an important point. Why care so much about the model if it is essentially just a data holder, with or without some embedded logic? So, for the time being, let's define the Model like this:

The model in MVP is a set of objects that the Presenter interacts with in order to perform calculations and move data from/to the view.

We can then divide these objects into two categories: Libraries and Services.

Libraries
These are objects of a static, stateless nature that operate solely on the input data and return a response. Their code lives in the same execution environment (OS process, HTML page, JVM ...). This makes libraries easy to test, because they don't depend on the environment.

Side note: another nice effect is that they scale well (because they are stateless, i.e. functional)

This means that:

  1. Libraries are supposed to have unit tests of their own
  2. We don't need to mock them
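
For illustration, a library in this sense can be as small as this Java class (the name and the calculation are hypothetical, made up for this post):

// A stateless library: the result depends only on the input,
// which makes it trivial to unit test in isolation.
final class PriceLib {
    private PriceLib() {}

    static double withVat(double netPrice, double vatRate) {
        return netPrice * (1.0 + vatRate);
    }
}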


Services
The majority of these are objects that let the presenter interact with external data stores. For example:

  • Query a database table to get the user's preferences
  • Call a REST service to post a new photo in an image gallery
  • Some services are stateless, much like libraries - for example, a weather service

Most of the code of a service object lives outside of the execution environment of the presenter and is accessible through some protocol, such as HTTP, RMI, CORBA etc. A small part of the service code lives in the same environment and is used to make the actual call through the respective protocol. For example:
$.ajax({ ... })

There can also be services that have been introduced in order to isolate the components of a large piece of software. For example, in multi-view screens the different views often communicate through a message bus. The message bus is then also a service, even though it does not communicate with remote hosts. By definition, services shall not be called during a unit test, because unit tests shall be independent of the environment. That is why in a test case they are mocked out and usually also stubbed, to simulate different responses.

SVP - Service View Presenter
We therefore arrive at a pattern called SVP, in which we aim to structure our code around:

  • Services - as described above
  • View - as in MVP
  • Presenter - as in MVP
Libraries and data holders (DTOs) still exist, but we don't care much about them, because they do not interfere with our testing process. Want to use a library inside your presenter? That's fine. As long as the test passes and the presenter code is 100% covered, you can use whatever library you want. Here is the essence:
  • Only the presenter is subject to unit testing
  • The presenter is injected with the view and the services (e.g. through a constructor or a DI library) - see the sketch below
  • The view and services are mocked and stubbed in order to simulate the test scenarios
  • Data holders and libraries remain untouched, but as a side effect are covered by the test as well
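
To make this concrete, here is a minimal Java sketch of the wiring. All of the names (LoginView, UserService, LoginPresenter) are hypothetical, made up for illustration:

// Hypothetical view abstraction: purely declarative, no logic of its own.
interface LoginView {
    String getUserName();
    void showGreeting(String text);
}

// Hypothetical service: in production this could e.g. call a REST backend.
interface UserService {
    boolean isKnownUser(String name);
}

// The presenter gets the view and the service injected through its constructor.
class LoginPresenter {
    private final LoginView view;
    private final UserService users;

    LoginPresenter(LoginView view, UserService users) {
        this.view = view;
        this.users = users;
    }

    // Called by the view when the user clicks the login button.
    void onLoginClicked() {
        String name = view.getUserName();
        if (users.isKnownUser(name)) {
            view.showGreeting("Welcome back, " + name);
        } else {
            view.showGreeting("Hello, " + name);
        }
    }
}

A unit test would then inject mocks of LoginView and UserService, stub isKnownUser to return true or false, and verify the showGreeting calls - without rendering any UI or touching the network.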

This pattern is only slightly different from MVP, in that it refines the Model into Services and spells out the role of libraries and data holders. I hope it will be useful, especially to young software developers trying to find their way around MVC/MVP.

Wednesday, November 16, 2011

Windows 7 for XP fans

The problem
Having upgraded to Windows 7 recently, I found myself in a very unpleasant situation. I do a lot of text processing, e.g. programming, answering lots of emails, reading documentation etc., and the appearance of text makes a difference. By default, Windows 7 comes with a feature called ClearType that is supposed to improve the readability of text on "modern LCD screens". Sadly, for me it works exactly the opposite way. After working for half an hour, I felt fatigue in my eyes and got a headache. At that point I decided to stop any work until I managed to fix the appearance of text, or else dump Windows 7 altogether.

The solution
I searched the net for a while and came across several useful tips:
  • Disable all visual effects (Control Panel > search for "visual effects" > select "Adjust the appearance and performance of Windows" > either select "Adjust for best performance" or choose "Custom" and disable all checkboxes). This will effectively disable Aero as well, and the computer will look more or less like Windows XP
  • Turn off ClearType (Control Panel > search for "clear type" > select "Adjust ClearType text" > uncheck the checkbox and click Next until the end of the wizard)

Apart from that, I had to tune the programs I use most to use better fonts:

  • I am using Eclipse, so I adjusted its editor fonts - go to Window > Preferences > search for fonts > set the font of the editors that you use (e.g. Java, JavaScript, Text, C/C++) to Courier New, 10 points
  • I am using Firefox 8 for browsing, so I had to adjust the fonts there as well - go to Options > the Content tab > in the Fonts & Colors section click Advanced. Use "Arial Unicode MS" for the Serif and Sans-serif fonts and Courier New as the Monospace font

Another minor thing is switching the Office theme to Black, so that Outlook etc. do not glow too much in your eyes.

With all that set, I now enjoy a much better text experience. Hope this helps you too :)

Friday, October 21, 2011

Using non-ASCII symbols in Boost file names

The Boost.Filesystem library has a lot of goodies to make the life of the cross-platform developer easier and looks like a natural choice for the contemporary C++ developer.

One of the things that's been bugging me recently, though, is how to use non-ASCII file names in paths on Windows. The problem I encountered was that Boost would throw this exception:

boost::filesystem::create_directory: The filename, directory name, or volume label syntax is incorrect:

every time I tried to e.g. create a directory with a non-ASCII name (e.g. something in Cyrillic). After digging the net for a while, I found this in the Boost.Filesystem documentation:

The default imbued locale provides a codecvt facet that invokes Windows MultiByteToWideChar or WideCharToMultiByte API's with a codepage of CP_THREAD_ACP if Windows AreFileApisANSI() is true, otherwise codepage CP_OEMCP.
This was what I was looking for. Setting which code page to use for the conversion is then done by imbuing boost::filesystem::path with a suitable locale.
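
A minimal sketch of one common approach, assuming Boost.Filesystem v3 and using Boost.Locale to generate a UTF-8 aware locale (the directory name is just an example):

#include <boost/filesystem.hpp>
#include <boost/locale.hpp>

int main() {
    // Make a UTF-8 aware locale the global default...
    std::locale::global(boost::locale::generator().generate(""));
    // ...and tell boost::filesystem::path to use it for all conversions.
    boost::filesystem::path::imbue(std::locale());

    // Construct the path from a wide string, so that nothing is lost
    // on the way to the Windows API.
    boost::filesystem::create_directory(boost::filesystem::path(L"директория"));
    return 0;
}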

Also, as in the sketch above, you have to use a wstring instead of a regular string to construct the boost::filesystem::path object. A very useful page that explains the internationalization aspects of char and wchar_t is this excellent answer.

Tuesday, April 19, 2011

It's never too late to shoot yourself in the foot

Yesterday I decided to clean up some of the components I thought I didn't need on my Kubuntu laptop. Amongst the targets were the Apache httpd server and the MySQL server and client. All went fine, but when I started the machine today, KDE did not load. I tried rebooting, but that did not help either. I couldn't remember how to connect to the wireless network from the command line, so I made a wired connection to my router (later on I found this useful post that explains how to do it). In order to start Firefox and look for a solution to my problem, I opened a console using Ctrl+Alt+F1, then exported the display with this command:
export DISPLAY=:0
Having done that, I launched Firefox by typing:
firefox &
on the command line. The ampersand is used in order to send the process to the background and keep the console usable.
Searching for KDE startup problems, I found this post, explaining how to restart your KDE Plasma desktop. I tried running "kstart plasma-desktop" and it gave me the following error:

"Application plasma-desktop could not be found using service org.kde.plasma-
desktop and path /MainApplication."
Searching for this message landed me on the KDE forum, where this post examined a similar problem.

After reading it for a while, I concluded that I was most likely missing the plasma-desktop package for some reason, so I decided to reinstall it using:

sudo apt-get install plasma-desktop
While the reinstallation was executing, I noticed that the MySQL server and client packages got installed again. It turned out that Plasma needs MySQL to function, and I had somehow confirmed the removal of Plasma while removing MySQL.

After the reinstallation I rebooted from the command line and KDE Plasma started normally.

The takeaway from the story: never do important reconfiguration without giving it a second thought, and don't rely too much on the modern, good-looking Kubuntu tooling to save you from doing something stupid.

Monday, October 4, 2010

205/55 R16 91H detailed winter tires comparison

From the ADAC 2009 winter tire test I picked 3 different models, for which I decided to drill down into the detailed test results in order to pick one. Here is the spreadsheet that shows the detailed characteristics (translated to English).

Nokian looks like the best choice in its price segment (around EUR 85 over here), but it has worse characteristics in wet conditions, especially aquaplaning. Of the other two, Dunlop is better on ice, whereas Continental is better on snow, fuel economy and noise.

I decided to go for the Dunlop. Over here they can be purchased for around EUR 110 apiece (tyre exchange included)...

Update 1:
I've got the tires. I had to wait a couple of weeks because the dealer only had the T speed index in stock. The tires cost me EUR 97 apiece (tyre exchange, bags for the old ones etc. included). We still haven't got snow or ice, so I can't test them properly yet. On dry and wet roads they feel great. Not very noisy either.

I'll post more when I've tested the tyres on snow/ice...
Update 2:
We had plenty of snow and ice recently, and I've been able to test the Dunlops. I was amazed how accurately the ADAC test described them. Indeed, the tyres have excellent grip on snow and especially on ice. With my previous car I had Michelin Alpin A3 tyres, and I had problems parking in my parking spot when there was ice. With the Dunlops I never had this problem. With respect to fuel economy, the ADAC test is, unfortunately, again right :). My summer tires are Michelin Primacy HP in exactly the same size. With the Dunlops I had about 0.5 L/100 km higher fuel consumption (measured on a 470 km journey). Since I don't drive that much in the winter (around 3000 km), this is not a problem for me. However, if you do a lot of mileage in the winter and don't have as much ice as we do, you might want to do the math and see if the Continental is a better choice.
Overall, great job Dunlop :)

Tuesday, April 6, 2010

URL.openStream and JarURLConnection may cause file handle leaks

A couple of days ago I noticed that there was often a problem during the automatic redeployment of our web application to the central Apache Tomcat server. After some investigation it turned out that there was a file handle leak preventing Tomcat from redeploying the application:

INFO: Undeploying context [/myapp]
Apr 6, 2010 10:08:39 AM org.apache.catalina.startup.ExpandWar deleteDir
SEVERE: [C:\apache-tomcat\webapps\myapp\WEB-INF\lib] could not be completely deleted. The presence of the remaining files may cause problems

The result is that after redeployment the directory with the exploded app contains only WEB-INF\lib, with one jar file inside it (the leaking one). In order to find out the cause of the leak, I analyzed the application with JPicus. The analysis showed that the root cause of the problem was a call to java.net.URL.openStream, which in turn delegated the call to a JarURLConnection.

URLConnection and its subclasses may open resources like files and sockets and keep them open "for a while" to improve efficiency. Some attempt to address this issue has been made in the response to this bug report. It suggests that getting a stream from the connection and closing it will close all underlying resources. This behavior is not explicitly stated in the javadoc, and is hence JVM implementation specific. Even with OpenJDK, you are very likely to get resource leaks if the URL references an entry in a jar file. For example, this code snippet, when executed on Java HotSpot(TM) Client VM 1.6, will keep the file open even though the stream is closed:

URL url = new URL("jar:file:///C:/Users/user/my.jar!/");
InputStream in = url.openStream();
in.close(); // the stream is closed, but the underlying jar file stays open

According to the javadoc, this behavior can be controlled by the property "useCaches", which probably defaults to "true". Unfortunately, in my case this was not under my control, because the call to URL.openStream had been made indirectly by log4j's Logger.getLogger.
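
If the call site is under your control, a minimal sketch of opting out of the cache could look like this (using the standard URLConnection.setUseCaches method; the jar entry name is hypothetical):

import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class NoCacheJarRead {
    public static void main(String[] args) throws Exception {
        URL url = new URL("jar:file:///C:/Users/user/my.jar!/some/entry.txt");
        URLConnection conn = url.openConnection();
        conn.setUseCaches(false); // ask the connection not to cache the underlying jar file
        InputStream in = conn.getInputStream();
        try {
            // ... read from the stream ...
        } finally {
            in.close(); // close the stream as soon as possible
        }
    }
}

Whether this actually releases the jar file handle is, again, implementation specific, so it is worth verifying with a tool like JPicus.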

In case the URL is not pointing to an entry in a jar file, closing the stream might actually help. For example, if log4j is reading its properties file from the WEB-INF/classes directory and closes the stream after that, there shall be no leaks. This, however, takes us to my next discovery. It appears that log4j had been handling the connection streams incorrectly until version 1.2.15, where this was fixed. If your properties file is not in a jar file, you may benefit from upgrading to version 1.2.15 or above.

It might be dangerous to use URL.openStream or the URLConnection classes in an environment where resource leaks are not tolerable. If you do so, you must take care to close all streams as soon as possible (this applies to every other stream in general). In case you use URLs that refer to entries in a jar or a zip file, it is very likely that the underlying files will not be closed even though you have closed all streams properly. Then you are left at the mercy of the garbage collector to close the file upon finalization.

Note: when trying to run the JPicus agent, the -javaagent option did not work when running Tomcat as a service. The workaround was to start Tomcat from the command line to perform the analysis.

Monday, March 15, 2010

D-Link DIR-615 Review

Having used this router for almost a year, I decided to share my personal impressions of it.

Things that I like/use:
  1. Price - it was one of the cheapest wireless-N routers on the market at that time (around 50 EUR).
  2. Easy to configure - most of the things are obvious once you log in
  3. Relatively good range - the router has two antennas, and while some people argue that three antennas are better, I would say that for a small house or a big apartment the range is satisfactory
  4. Wireless security - WPA2 plus the ability to filter users by MAC address gives you reasonable security
  5. DHCP reservation - lets you assign IP addresses per MAC address, so that your computers always get the same IP address. This makes it easier to configure things like port forwarding
Things that I don't like very much:
  1. Wireless throughput - I've only been able to get modest speeds of around 1 MB/s when copying files between two computers. It probably depends on the file sharing protocol (NFS in this case), because BitTorrent was able to reach around 3.5 MB/s over the wireless.
  2. DNS Relay - this one is the nastiest problem that I had. This option is enabled by default. The idea is to minimize the number of DNS requests to your ISP's DNS server by caching DNS mappings in the router for a while. In order to do this, the router assigns itself as a DNS server to all the hosts on the network that are configured with DHCP. The problem with this is that the built-in DNS server stops responding to DNS requests after a while, and all applications like browsers etc. stop working. I figured that out by issuing several dig @server commands, replacing the server with the router and with my ISP's DNS server. While the latter was always working fine, the former was getting stuck after a while. The first workaround that came to mind was to configure the hosts on the network statically, so they don't use the router as a DNS server. While it worked, it requires manual effort for each new host on the network and makes it more complicated to configure hosts that roam around different networks. I thought I had given up on this one, until I found the option to turn off the DNS relay in the SETUP > LAN Setup menu. There is a checkbox that you have to uncheck. Then the router stops assigning itself as a DNS server and starts assigning the ISP's DNS server to the hosts configured by DHCP. I have only tested this when the IP address from my ISP is assigned statically. I am not sure if it works with DHCP or PPPoE ISPs.
  3. Support from D-Link - I filed a ticket for this problem but never got a meaningful response

If it weren't for the DNS problem, I would say that overall I am satisfied with the router.