A site devoted to discussing techniques that promote quality and ethical practices in software development.

Wednesday, December 26, 2012

Caveat in using KGpg in Ubuntu

After discovering KGpg as a suitable replacement for GPA, I ran into a rather strange now-you-see-it, now-you-don't situation.

I have also discovered many posts complaining that KGpg does not appear to run. KGpg is a KDE application, and it turns out that after the first execution, the default behaviour on any subsequent invocation is to minimise to the system tray.

The system tray in Ubuntu is different from that in Kubuntu, and this default start-up mode gives the illusion that the program fails to run. It is actually running fine; it has simply been minimised to the system tray.

Hence, after configuring KGpg for the first time, you will not see its window any more in Ubuntu, not even if you reinstall it.

This is because KGpg writes the settings to this file:
~/.kde/share/config/kgpgrc

and uninstallation does not remove this file.

If you invoke KGpg in Ubuntu and do not see its window showing up, do this:

1) Use System Monitor to terminate the KGpg process, if it is running.

2) In the Terminal, type:
gedit  ~/.kde/share/config/kgpgrc

to edit the configuration file

3) In the [User Interface] section add the following line:
systray_icon=false

4) Save the file and restart KGpg.

During the first run of KGpg, you can change the default so that it does not use the system tray icon, via Settings > Config KGpg > Misc > Applet & Menus. If you forget to do it at that stage, you need to edit the configuration file manually as shown above.
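If you prefer, the manual edit can be scripted. Here is a minimal sketch, assuming the default kgpgrc location (override CONF if your KDE configuration lives elsewhere):

```shell
# Sketch: ensure KGpg starts with its main window instead of the tray icon.
# Assumption: the default config path used by KGpg on Ubuntu.
CONF="${CONF:-$HOME/.kde/share/config/kgpgrc}"

mkdir -p "$(dirname "$CONF")"
touch "$CONF"

# Add the [User Interface] section if it is missing.
grep -q '^\[User Interface\]' "$CONF" || printf '[User Interface]\n' >> "$CONF"

if grep -q '^systray_icon=' "$CONF"; then
    # Rewrite an existing setting in place.
    sed -i 's/^systray_icon=.*/systray_icon=false/' "$CONF"
else
    # Insert the setting just under the section header.
    sed -i '/^\[User Interface\]/a systray_icon=false' "$CONF"
fi
```

After running it, start KGpg again and its main window should appear.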


Tuesday, December 25, 2012

Using a second disk drive in Ubuntu and auto-mounting it

Recently, I added a second disk (virtual) to my Ubuntu 12.04 and now I want to make that disk auto-mount when I log on.

When you create an additional disk drive using the "Disk Utility" tool, it adds that volume name to Nautilus. Depending on how you prepare that disk or partition, you may have trouble making it auto-mount: you can open Nautilus and click on that drive to mount it, but it will not mount automatically at login.

There are several ways to make that partition auto-mount. However, most of them only work when the drive has a partition.

It turns out that in Linux, when you prepare a disk drive, it is not necessary to have a partition at all. You can simply format the whole volume with the right file-system type and use it.

If you do not have a partition on that drive, programs such as Pysdm will not be able to configure it.

You can use udisks to mount the device on a per-user basis by defining the command in "Startup Applications", like this:

/usr/bin/udisks --mount /dev/disk/by-uuid/4746946c-1cb1-404a-a74b-4a59cb248df2

This is the technique I have chosen, as I am happy to auto-mount the disk on a per-user basis.

If you need to auto-mount a disk drive on a system-wide basis, make sure that when you prepare the disk you:
  1. Create a partition
  2. Format that partition and assign a volume name to it
  3. When you run Pysdm, it will report that this device has not been configured; proceed to configure it.
  4. The name Pysdm gives this partition is actually the mount-point name; make sure you use the volume name, otherwise two entries will appear in the /media directory when the device is mounted, which can be confusing.
Once Pysdm has configured the device, you can choose to auto-mount it and it will be mounted at system start-up.
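For reference, what Pysdm ultimately writes is an /etc/fstab entry. A sketch of such a line follows; the UUID, mount point and file-system type are placeholders, not values from my system:

```shell
# /etc/fstab (sketch; substitute your own UUID, mount point and type)
# <file system>                             <mount point>  <type>  <options>  <dump>  <pass>
UUID=4746946c-1cb1-404a-a74b-4a59cb248df2  /media/Data    ext4    defaults   0       2
```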

In search of GPA replacement in Ubuntu continues ...

Since Ubuntu 11.10, GPA has no longer been supported, so I have been searching for ways to keep using it and have found several schemes:
  1. Running the old GPA with root access, and 
  2. Emacs with its plug-in 
Now I believe I have found a third one, which provides a very convenient GUI facility to encrypt and decrypt clipboard data as well as files, and which is supported in Ubuntu 12.04.

Testing with KGpg, one of the front ends listed by GnuPG that is available for installation via the Ubuntu Software Centre, indicates that it lives up to expectations and does not require any tricks, such as root access or going through an editor.

It is just a front end to the GnuPG installation that comes standard with Ubuntu. Hence you can use either "Passwords and Keys" or KGpg's user interface to manage the keys; they share the same private and public key stores.

So far I am very happy with this front end, and hopefully it will be supported in future versions of Ubuntu.

Wednesday, December 12, 2012

Using Gpg in Ubuntu 12.04 - replacement of GPA

Ever since I migrated beyond Ubuntu 11.04, I have lost the use of GPA; I can only run it with super-user privileges.

While that works very nicely, it creates the ~/.gnupg directory with root ownership, preventing other GnuPG programs, such as Seahorse, from accessing the keys. Seahorse does not even report the access problem; it simply misbehaves, leaving the user oblivious to the underlying cause.

In the past, one could rely on the Encryption plugin for gedit to perform the encryption and decryption operations, but that plugin is not supported in Ubuntu 12.04.

In search for some graphical means to encrypt and decrypt data on clipboard, I believe I have managed to connect all the dots to form a feasible solution. Here are the ingredients:
  1. Install Seahorse (I used the Synaptic Package Manager).
  2. Once installed, use Seahorse to import your private and your friends' keys.
  3. Make sure gnupg2 and gnupg-agent are installed.
  4. Install Emacs, for example from the Ubuntu Software Centre.
  5. Use the Synaptic Package Manager to install EasyPG, an Emacs plug-in (it can also be obtained from here).
Before one uses EasyPG, one needs to be aware of a reported potential vulnerability. The combination of EasyPG and GnuPG also throws up some curly configuration choices around caching the passphrase.

If you are decrypting material for the first time, the program will prompt for the passphrase. On its graphical dialog box, you can use the Details link to reveal the passphrase-caching options, allowing you to specify how long the passphrase is kept before being discarded. Unfortunately, there is no option to discard it immediately after use.
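The caching behaviour itself is controlled by gpg-agent. Here is a sketch of the relevant settings in ~/.gnupg/gpg-agent.conf; the values are illustrative, not recommendations:

```shell
# ~/.gnupg/gpg-agent.conf (sketch; times are in seconds)
default-cache-ttl 300   # forget a passphrase 5 minutes after last use
max-cache-ttl 3600      # forget it after 1 hour regardless
```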

Now you are ready to perform the GnuPG operations using Emacs.

For those new to Emacs, here is a brief introduction on how best to use Emacs to encrypt and decrypt material via the clipboard:
  1. Open Emacs
  2. Use Buffers | *scratch* to switch to the scratch buffer so that what you type or paste does not get saved to a file. In Emacs, even if you do not explicitly save a file, the content is written to a form of temporary file whose name is formed from the given file name, which is something you want to avoid.
  3. Type your plain text or paste the cipher text into the buffer.
  4. Select the entire text block. (Use Edit | Select all).
  5. Then use Tools | Encryption/Decryption to bring up the sub-menu.
  6. Select Encrypt Region or Decrypt Region depending on your need at this time.
  7. If it is to encrypt the region, it will open a split window at the bottom showing brief instructions and a list of keys managed by Seahorse.
  8. Use the Tab key to navigate to the key list, then press m to select and u to deselect a key.
  9. Once the keys are selected, click OK to perform the encryption and follow the prompt.
  10. If it is a decryption, it will prompt you for your passphrase and then cache it.

Monday, November 26, 2012

Modelio fixes the problem in implementing interfaces


The problem as reported in implementing an interface's properties has now been fixed. You can simply download the mdac (ModelingWizard_4.0.09.jmdac).

Monday, November 12, 2012

Modelling Template Method Pattern in Modelio UML

According to this post, Modelio (ver 2.2.0) has a very narrow definition of the types of class that can implement an interface.

For example, according to Modelio, you cannot use its "Modeling Wizard > Implement interfaces properties" to implement the interface method in Foo, because it is an abstract class:


I have yet to meet a programming language that places such a restriction. In C#, I have frequently implemented an interface with an abstract class by implementing the interface methods as abstract methods.

You can still use Modelio (ver 2.2.0) to model the application of, say, the Template Method pattern, which is often done with an abstract class, like this:

But you have to do some extra work:
  1. If Foo is an abstract class, temporarily change it to a non-abstract class
  2. Then use the "Modeling Wizard > Implement Interfaces properties" feature to implement the interface method(s) in Foo
  3. Change Foo back to an abstract class
  4. Add any additional abstract methods, including changing Bar() to abstract if required.
  5. To implement the abstract Bar2Impl() in the MyFoo class, Modelio does not have a convenient way like BoUML's. However, you can copy-and-paste the abstract method into the implementation class and change it to a non-abstract method.
Do not use the "Delete Interfaces Properties Implementations" feature in ver 2.2.0, as it has a bug that deletes all methods in that class.

Sunday, November 11, 2012

Slaying AVG2012's False Positive identification of my program as Luhe.Fiha.B

After I upgraded to AVG2012 some time ago, it has been causing me grief by flagging my C# program as infected with the Luhe.Fiha.B virus/trojan.

It is a simple program to perform date calculations and nothing sophisticated.

I checked the source out from the repository and rebuilt it from the ground up. Still, AVG continued to pick on it. I was convinced that I was right and AVG was wrong. So finally it came to a head: I set out to prove AVG deadly wrong and dumb, a view I have held for a long, long time about anti-virus programs, not just AVG.

Finally, I have managed to prove beyond doubt that AVG2012 is just another one of those dumb scanners lacking any form of intelligence.

The false positive is caused by two BMP images in the application's icon list, which are not even used. Once I removed these two 'offending' images from the image list, AVG failed to raise the red flag.

If you have a similar experience, try removing unused artifacts from your .NET project. If AVG is interested in getting hold of this .ico file, I can e-mail it; just contact me.

However, AVG fails to identify the file itself as Luhe.Fiha.B-infected, even when I ask AVG to scan it. It has to be used as the application icon in the C# project file to trigger AVG's rage.

Saturday, November 10, 2012

Crazy idea - Razer mouse and cloud

It seems a day does not go by without someone becoming seduced by the cloud; nowadays, anywhere to store things is enough to be called a cloud. Totally crazy.

Now a Razer mouse has to be registered with Razer's cloud activation server before its full capability can be used.

I totally agree and sympathise with the author of this post in lambasting Razer for this crazily unfriendly scheme.

Razer's Ming-Liang Tan's justification is even more questionable:
Instead of having mouse settings limited by the space in onboard memory, Synapse 2.0 allows gamers to now have almost unlimited space for their profiles and macros.
Do they really need unlimited space? Perhaps Razer should publish some survey or statistics to convince the doubters. What is wrong with something like a USB drive? How much memory are we talking about? A 16 GB USB drive is as common as a mouse attached to a machine.

Why not put an SD card slot in the Razer device so that users can stash their settings in that?

While Razer is free to ask its users to register, it should not coerce them; I find the order of steps in the process somewhat unfriendly and draconian. According to Razer:
Once registered, Synapse 2.0 works offline and never needs to be online again. So basically, a user creates an account, saves initial settings, and if there's no internet connection, it doesn't matter - settings are saved on the client PC and are not synced to the cloud. Synapse 2.0 works offline. 
If I had bought one of these mice, I would definitely be annoyed. What right does Razer have to demand my e-mail address before I can use the full capability of their product? Why should I have to register and activate it?

I guess by now their cloud server is full of fake information and disposable e-mail addresses from people simply eager to dismiss the pest. I do that all the time; after all, it is a virtual world, and honesty has no place in it.

Perhaps their developers have not considered their users' feelings properly. A much more user-friendly way is to reverse the order of the above-mentioned process. That is:
  • Let the user on to the settings program - regardless. They are already your customer and you have their money, mind you!
  • On saving (I hope Razer has a way to save the settings), offer the user a choice, as all good, user-caring programs do.
  • If they choose to register, they have the benefits blah blah blah - do your Razer advertising to convince your user to go your cloud way - but don't coerce.
  • Or the user can choose to store the settings locally on their USB drive etc.
I have to say, the current implementation is too draconian: "this is the way, take it or leave it". Not nice, and it deserves users' wrath.

Saturday, October 20, 2012

OFX Encoding bug - This time from CBA

Further to my discovery of a financial institution failing to generate a standard-compliant OFX file, while I was helping a friend download bank statements from the Commonwealth Bank of Australia's NetBank facility, I noticed something not quite right.

One would think CBA, being one of the largest banks in Australia, would have hired the most competent developers to code its statement download in OFX format. Well, it seems their developers have failed the test of standard software development practice, producing ridiculous results, just like the others.

In CBA's case, it seems that they have not read section 3.2.8.2 of the OFX 1.02 specification for representing datetime values.

In 4 of the 13 files I downloaded in the space of a day, the elements <SONRS><DTSERVER>, <LEDGERBAL><DTASOF> and <AVAILBAL><DTASOF> contained ridiculous dates. Here are the figures:
File-01:
20120937021837
20120937021837
20120937021837

File-02:
20120945030545
20120945030545
20120945030545

File-03:
20120950015650
20120950015650
20120950015650

File-04:
20120948021348
20120948021348
20120948021348
Section 3.2.8.2 of the standard says:
Tags specified as type date or datetime and generally starting with the letters “DT” accept a fully formatted date-time-timezone string. For example, “19961005132200.124[-5:EST]” represents October 5, 1996, at 1:22 and 124 milliseconds p.m., in Eastern Standard Time. This is the same as 6:22 p.m. Greenwich Mean Time (GMT).

Date and datetime also accept values with fields omitted from the right. They assume the following defaults if a field is missing:
Note that times zones are specified by an offset and optionally, a time zone name. The offset defines the time zone.

Take care when specifying an ending date without a time. If the last transaction returned for a bank statement download was Jan 5 1996 10:46 a.m. and the ending date was given as just Jan 5, the transactions on Jan 5 would be resent. If results are available only daily, then just using dates and not times will work correctly.

NOTE: Open Financial Exchange does not require servers or clients to use the full precision specified. However, they are REQUIRED to accept any of these forms without complaint.
Some services extend the general notion of a date by adding special values, such as “TODAY.” These special values are called “smart dates.” Specific requests indicate when to use these extra values, and list the tag as having a special data type.
So how can any developer possibly produce days 37, 45, 50 and 48 in September? Do they actually test their code at all? I wonder if these people have actually read the standard before coding.
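Out of curiosity, bogus day fields like these can be caught with a few lines of shell. This is only a sketch: it scans just the elements named above, and the sample file name is my own invention:

```shell
# Sketch: flag OFX datetime values whose day-of-month is impossible.
# Scans only the DTSERVER/DTASOF elements discussed above; OFX 1.x input assumed.
check_ofx_dates() {
    grep -oE '<(DTSERVER|DTASOF)>[0-9]{8}' "$1" |
    while IFS= read -r m; do
        day=$(printf '%s' "$m" | tail -c 2)   # YYYYMMDD: last two digits are the day
        if [ "$day" -lt 1 ] || [ "$day" -gt 31 ]; then
            echo "bad day $day in $m"
        fi
    done
}

# Try it on a sample containing one of CBA's bogus values:
printf '<DTSERVER>20120937021837\n<DTASOF>20120905120000\n' > sample.ofx
check_ofx_dates sample.ofx
# → bad day 37 in <DTSERVER>20120937
```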

Thankfully, GnuCash does not use these elements, otherwise the downloaded data could not be imported into GnuCash.


Friday, October 5, 2012

Microsoft's System.Xml.Serialization.XmlSerializer generates uncompilable code for some xs:double default values

Consider the following valid schema:
<xs:schema id="MyDemoSchema"
    targetNamespace="http://tempuri.org/MyDemoSchema.xsd"
    elementFormDefault="qualified"
    xmlns="http://tempuri.org/MyDemoSchema.xsd"
    xmlns:mstns="http://tempuri.org/MyDemoSchema.xsd"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
>
  <xs:complexType name="Foo">
    <xs:sequence>
      <xs:element name="Name" type="xs:string" minOccurs="0" maxOccurs="1" />
      <xs:element name="SomeNumber" type="xs:double" 
                  minOccurs="0" maxOccurs="1" 
                  default="NaN" />
    </xs:sequence>
  </xs:complexType>

  <xs:element name="Foo" type="Foo" />
</xs:schema>

The minute you execute the following piece of code:
Foo f = new Foo();
XmlSerializer ser = new XmlSerializer( typeof(Foo) ); // ---- Error CS0103 ---

you will get a CS0103 compilation error. This is caused by the CLR, which at that moment generates a temporary serialization assembly that actually performs the serialization and deserialization.

It is the code generation process that fails to handle W3C acceptable xs:double values such as NaN, INF and -INF producing uncompilable code.

In brief (a more detailed exposé follows below), it generates code like this:
   if( x != NaN ) { // <-- causes CS0103
     // do something 
   }

This naturally does not compile, and it produces the same error message seen when the XmlSerializer is instantiated. In any case, you cannot use an equality operator to test whether a System.Double is Double.NaN; you must use Double.IsNaN().

This is the exact code fragment of the generated code causing the CS0103:
if (((global::System.Double)o.@SomeNumber) != NaN) {
    WriteElementStringRaw(@"SomeNumber", @"http://tempuri.org/MyDemoSchema.xsd", 
    System.Xml.XmlConvert.ToString((global::System.Double)((global::System.Double)o.@SomeNumber)));
}

There is no way this line can compile. It has to be corrected like this:
if ( !System.Double.IsNaN ((global::System.Double)o.@SomeNumber)  ) {
    WriteElementStringRaw(@"SomeNumber", @"http://tempuri.org/MyDemoSchema.xsd", 
    System.Xml.XmlConvert.ToString((global::System.Double)((global::System.Double)o.@SomeNumber)));
}

Detailed discussion and workaround

To see how this manifests as a runtime error, you can use sgen.exe with the /keep switch to look at what happens. The catch-22 is that if your schema's default value is NaN, sgen.exe will fail to generate the serialization DLL, because the generation process produces the above-mentioned uncompilable code, and so you cannot see the generated code.

So, in order to see what is happening, replace the default value with a double that you can recognise. In my experiment, I used -5.55.

Once your schema is modified, use xsd.exe to generate the C# classes and bundle them into an assembly; you can then use sgen.exe with the /keep switch to generate the serialization DLL together with its source file. It also generates a file with the extension .cmdline, which is essentially the response file for CSC.
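For reference, the round trip described above boils down to a handful of commands. This is a sketch only: the schema, namespace and assembly names here are placeholders, and the kept source and response file names will differ on your machine:

```shell
xsd.exe /classes /namespace:Demo MyDemoSchema.xsd   # generate MyDemoSchema.cs from the schema
csc /target:library /out:Demo.dll MyDemoSchema.cs   # bundle the generated classes into an assembly
sgen.exe /keep Demo.dll                             # generate the serialization DLL, keeping its source
# ... edit the kept .cs file as described below, then recompile
# with the edited response file from the appropriate directory:
csc @OutTestDir\5fm3rmet.0.cmdline
```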

Using our default value as a search string, you can quickly locate the potentially offending piece of code. Below are the steps to prove that the code generated by the serialization process is faulty, and how to rectify it.

Now correct the line where it checks for -5.55 to use System.Double.IsNaN() as shown above.

The next step is to rebuild the serialization DLL; to do that, you need to modify the schema to change the default value back to NaN.

Then use xsd.exe to regenerate the class and rebuild your assembly.

Next you modify the cmdline file as follows:
  • remove the reference to System.Xml.dll, to avoid a duplicate-reference compilation error. 
  • remove /D:_DYNAMIC_XMLSERIALIZER_COMPILATION 
  • to aid experimentation, change /debug- to /debug.
Check the response file for the location of the .cs file; you should execute the response file from that directory. For example, if the source file is "OutTestDir\5fm3rmet.0.cs", you should run CSC from the parent directory of "OutTestDir".

Next, compile and generate the serialization DLL, which by now should have the offending line corrected to use Double.IsNaN().

Next, modify the project that previously called the XmlSerializer constructor by:
  • adding a reference to the serialization dll. 
  • then replacing the following piece of code, where you instantiate an XmlSerializer:
Foo f = new Foo();
StringBuilder buffer = new StringBuilder();
using( XmlWriter writer = XmlWriter.Create( buffer, null ) )
{
   XmlSerializer ser = new XmlSerializer( typeof(Foo) ); // <-- to be replaced
   ser.Serialize( writer, f );
}

With this:
Foo f = new Foo();
StringBuilder buffer = new StringBuilder();
using( XmlWriter writer = XmlWriter.Create( buffer, null ) )
{
   Microsoft.Xml.Serialization.GeneratedAssembly.FooSerializer ser = 
       new Microsoft.Xml.Serialization.GeneratedAssembly.FooSerializer();
   // XmlSerializer ser = new XmlSerializer( typeof(Foo) ); // <-- to be replaced
   ser.Serialize( writer, f );
}

If you have the PDB, you can easily step through the code to convince yourself that your fix indeed does work.

Let's hope Microsoft will fix this bug, which is present in .NET 4. Maybe .NET 5 will have fixed it?

Monday, October 1, 2012

Android 'smartphone' cannot create bi-weekly events

Further to my reporting of this issue on my Samsung Galaxy Note, I went digging a bit deeper to see whose fault it is - Samsung's, or that of its underlying operating system, Android.

Just to recap, here are the only options offered to me when I try to create a repeating event on the Galaxy Note:

What is so difficult about 'bi-weekly' - a 2 x 7 day interval?

It turns out that this defect appears to be caused by the underlying operating system - Android - and is found on HTC, the Samsung Nexus, the Samsung S3 and the Samsung Galaxy Note. The only thing common among them is the Android operating system.

Anyone with an Android phone can try their hand at creating an event that repeats every two weeks in the calendar/appointment manager shipped with their machine.

It begs the question of how the developers, and the product managers who oversaw their work, could be so ignorant of this basic business requirement. Have they not used any other calendar programs in their lives? Or are they so arrogant that they just want to be different for the sake of being different?

Perhaps they are so insulated from living their entire lives in Google Calendar, which can do this, that they do not understand there are users out there who do not live like them. If that is the case, it is worth them remembering this simple phrase: "There are more users than developers"

Well, Apple has that covered in iOS, which is more business-friendly. Perhaps Apple should run a TV advertisement, much like the Mac vs PC ones, to highlight Android phones' failure to support this basic business requirement.

Monday, September 17, 2012

INGDirect fails to generate OFX 1.0.2 compliant document

The Australian online bank www.ingdirect.com.au seems to have a good track record of not testing whether its generated download files conform to the relevant specification.

Now it is failing to generate an OFX 1.0.2 compliant data file for its online customers.

It seems their developer has trouble understanding this DTD specification:

<!ELEMENT STMTRS - - (CURDEF , BANKACCTFROM , BANKTRANLIST? , LEDGERBAL , AVAILBAL? , MKTGINFO?)>

As a result, their OFX document does not contain the BANKACCTFROM and LEDGERBAL elements. When GnuCash tries to import this malformed document, it fails to recognize any valid transactions in the BANKTRANLIST element. KMyMoney reports a 'No Accounts found' error.

Have they attempted to import their test document into any accounting packages?

It is a worry when developers at a financial institution cannot even understand the basic DTD in the Open Financial eXchange specification. If they cannot write an OFX-compliant library, perhaps they should download the LibOFX library.

Monday, September 3, 2012

Apple vs Samsung - is it a case of trying to patent a door knob?

I have been following this Apple vs Samsung patent-violation case with total amazement. I am not a patent lawyer but a software developer, and I own a Samsung Note as an act of defiance, allowing me to flash such a device in an Apple Store just to annoy them by showing off cool features not found in Apple devices.

I fully agree with this assessment that Apple should lose on appeal. To the US Apple fanbois this may be a shock, but if you step across the Pacific Ocean, such an expectation is not wishful thinking but already a reality. In Japan, the court threw the case out, and in Korea the court found that Samsung did not copy. In Australia, Apple lost its attempt to ban the Galaxy Tab 10.1, and the S3 is selling like hot cakes.

I am a latecomer to the so-called smartphone, but no stranger to this kind of mobile device, judging from the collection of PDAs of different vintages in my drawers.

Recently, when I was holidaying in HK, I did an unscientific survey of what kinds of smartphones are used. What better place than the HK MTR to make my observations? To my astonishment, very few people brought out an iPhone! Non-Apple devices easily outnumbered Apple devices by at least 2:1. This was before the verdict in the Apple vs Samsung case. There were so many people using the Note, particularly for watching videos; the iPhone's screen is a midget by comparison.

The result of this survey is not too far off the mark from market reports on the smartphone market in China. Android phones in that part of the world are reckoned to command 60% of the market.

From my observation, Apple must be very desperate in clinging to this 'pinch to zoom', which is like patenting the door-knob action. I wonder if anyone has patented that.

This gesture is not unique to Apple. The Fujitsu Windows 7 tablet I was testing already had gestures like swipe and pinch, as well as stylus, mouse and keyboard operation.

I grew up with the PC from the CP/M and DOS days, through the Apple vs Microsoft era, which thankfully failed to stop the march of the Windows GUI, allowing a whole family of GUI operating systems to flourish. Apple has been litigious since those days. A journey through decades of evolution of GUI operating systems, from highly proprietary to open source, clearly shows that Apple's claim that patents are essential artefacts to encourage development is false and delusional.

To this layman's eye, Apple is more interested in using litigation to stop competition and to attempt to monopolise the smartphone market, and its latest antics support such a claim. As a technical person, I find their products only technically on par with others, and in some situations worse. Perhaps Apple's ads have some truth in them.

If you cannot buy the Samsung model of your choice in the US, the Internet will come to your aid. Apple may win the court battle, but it will lose the war. Why?

Scarcity is what makes an item more valuable and desirable, and not just smartphones. There is no bragging right in whipping out a common, toast-thick iPhone compared to an Apple-banned, slick, sexy, wafer-thin S3 or even my Note. The US has a good history of prohibition failures.

If the market for Samsung in the US is blocked by whatever means, the mobility of today's population and the Internet will entice people to seek out those hard-to-get items to show off. I do not think Google will be too willing to block search results for 'buying Samsung S3 in US'. You can also go to Alibaba, or create a business opportunity for a reverse Shipito, which has defeated those US online merchants who use geo-blocking techniques to protect their distribution channels.

Anyone doubting that should consider this real-life situation created entirely by Apple's marketing machine. When the Apple iPhone 4S was released, Apple thought it could whip up more hype by staggering its release worldwide. Hence it deliberately delayed the release in HK, a place where people love holding hard-to-get gadgets, and did not even announce the release date, to create more mystery and hype.

With the advent of the Internet, people in HK jumped online and ordered the iPhone 4S from Australia, a place not contaminated by artificially whipped-up, super-hyped marketing techniques, had it delivered to a local address, and then couriered it to HK. The iPhone 4S was so abundant in Australia when it was released that anyone could walk in and pick one up. I personally know several people who shipped iPhone 4S units, and DHL was doing a roaring trade. People are now gearing up to take orders for the iPhone 5, should Apple use the same marketing hype again.

The comical part was that these devices came from China and were then shipped back over the border to HK, China's SAR.

Witnessing the failure of this kind of strategy, which attempted to create scarcity, Apple quickly announced the release date and price, and the desirability of the iPhone 4S in that place vaporised as quickly as a puff of smoke.

The reverse applied in Australia. When Apple managed to temporarily secure a ban on sales of the Samsung Tab 10.1, interest in the device shot up, and people simply went online to HK or elsewhere to import them. This only hurt those local companies selling both Samsung and Apple devices. The Samsung Tab 10.1 was not renowned as the most brilliant tablet, but with help from Apple its desirability shot to the top of the bunch, and it is now so well known that it is almost synonymous with the iPad. This must have saved Samsung a lot of advertising money.

The same will apply to the S3, the Note, and other great devices yet to come. There will be good business opportunities in shipping to the US the Samsung S3, Note, tablets and other devices that Apple wants to ban. The victory, if any, will be hollow and purely academic.

That is the reality, and Apple would do better to learn to live with others, saving a lot of legal expenses. In doing so it would earn admiration for its prowess in marketing a device in a fiercely competitive environment, rather than beating others into submission with legal muscle over something as silly as trying to patent a door knob. At the moment, it has earned the dubious title of most litigious and angry company.

Door knobs are door knobs; you cannot, and should not be allowed to, patent them. All the best to Samsung, and thanks for such a great device as the Note!

Thursday, August 30, 2012

Experience in searching for a free UML tool, continued

Since my last post on the search for UML tools, I have added Gaphor to the list.

I am surprised that so many of these so-called UML tools are not really doing UML at all. UML is a very precisely specified modelling language, and yet these tools are at best nothing more than glorified drawing tools.

Some, like Dia, do not inter-relate the diagrams and artefacts. Gaphor does not even have anything to specify whether an operation is abstract or virtual.

None of them (the free ones) provides any way for a derived class to override a base class's virtual function, or to implement an interface automatically without laboriously defining the methods yourself.

In Umbrello and Modelio I have to use copy-and-paste. Because Umbrello's UI is more usable than Modelio's, the operation is not too bad for a small class/interface. Umbrello does not seem to have any syntax for an operation to return an array.

Modelio has something that resembles override support, but it is half-baked, and I am not sure whether what Modelio calls 'redefines' is the same as override.

Many of these tools do not use the standard vocabulary prescribed in UML, other than the visual notation. Why not?

After using Modelio and Umbrello to develop a fairly large model to test these two, which are better than the rest of the crop, it is hard to say one is significantly better than the other.

I must say Modelio is a very clumsy program to use and is very frustrating for quick modelling. It is one that seems to impose its own vocabulary rather than using the UML standard.

Umbrello is much easier and quicker to operate, but drawing a sequence diagram is a real challenge to one's patience and tolerance of its quirky operations. Modelio handles this relatively better and with more support.

If Modelio had a more convenient way for a derived class to quickly override base class operations, it would definitely be the winner. Interestingly, this feature is found in Microsoft's Visio and the once-free BoUML.

The search for the ideal free UML tool continues ....


Friday, June 22, 2012

Decoding WinMail.dat

Recently I have been receiving e-mail messages with an attachment called WinMail.dat and was puzzled about how to decode it.

I am happy to report that I have found a free Windows tool called "WinMail Opener". Good work, guys! Many thanks for this great tool.

It is indeed very small, and experimentation indicates that I can zip (or XCopy) the installed version and run it elsewhere without the need to go through installation.
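For the curious, WinMail.dat attachments are encoded in Microsoft's TNEF (Transport Neutral Encapsulation Format), and every TNEF stream begins with a fixed 4-byte signature. The following Python sketch is purely an illustration of that fact, not how WinMail Opener itself works; it merely checks whether a file really is TNEF:

```python
import struct

TNEF_SIGNATURE = 0x223E9F78  # magic number at the start of every TNEF stream

def is_tnef(data: bytes) -> bool:
    """Return True if the byte string starts with the TNEF signature."""
    if len(data) < 4:
        return False
    (magic,) = struct.unpack("<I", data[:4])  # little-endian unsigned 32-bit
    return magic == TNEF_SIGNATURE

# The first four bytes of a genuine winmail.dat file, little-endian:
sample = bytes([0x78, 0x9F, 0x3E, 0x22]) + b"\x00" * 8
print(is_tnef(sample))        # True
print(is_tnef(b"not tnef"))   # False
```

Actually extracting the embedded attachments requires a proper TNEF parser, which is exactly what tools like WinMail Opener provide.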

Tuesday, June 19, 2012

We need bi-weekly repeats of events in the Samsung Note's Planner

I have found this missing feature rather disturbing and annoying.

This question was posted back in 2010, and the feature is still so annoyingly missing on my Samsung Galaxy Note running Android version 4.0.3.

Bi-weekly events happen more often than most people (read: Android developers) think. The feature is available on the iPhone, has been in Outlook for a loooooong time, my Thunderbird with the Lightning plug-in can do it, and my Windows Live Mail calendar can do it. So Microsoft can do it but Google can't?

Why can't Android developers implement this feature, which to many is essential? The fact that it is supported in the above-mentioned list of appointment planners clearly indicates that it is not such a difficult programming exercise.
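To underline how small a programming exercise it is, here is a minimal Python sketch of bi-weekly recurrence (the dates are made up for illustration); in iCalendar terms this is just the rule RRULE:FREQ=WEEKLY;INTERVAL=2:

```python
from datetime import date, timedelta

def biweekly(start: date, count: int) -> list[date]:
    """First `count` occurrences of an event repeating every two weeks."""
    return [start + timedelta(weeks=2 * i) for i in range(count)]

occurrences = biweekly(date(2012, 6, 19), 4)
print([d.isoformat() for d in occurrences])
# ['2012-06-19', '2012-07-03', '2012-07-17', '2012-07-31']
```

A real planner also has to handle end conditions and time zones, but the core of the repeat rule is nothing more than this.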

The only conclusion I can draw is that this is another case of developers wanting to do something different rather than something essential. As a result, the software sucks.

Failing to support a basic requirement is not cool and not smart at all. Leaving out this kind of essential feature turns a smartphone into a dumb phone. Now I have to put all my bi-weekly events in Thunderbird on my laptop, despite the availability of thousands of Android apps out there. It is a joke.

Talking about calendars: on another tablet running Android 3.2, I fail to understand why defining an event in the Calendar app on that tablet requires me to open and log into a Google account. I do not need to do so in Windows Live Mail, Outlook, or Thunderbird.

Putting a calendar event on Google's Calendar should be the decision of the owner of that tablet or device, not Google's imposition. At least on my Android 4.0.3 device, I do not have to be tethered to a Google account.

Sunday, May 13, 2012

Running Gnu Privacy Assistant in Ubuntu 12.04

Ever since the release of Ubuntu 11.10, GPA (Gnu Privacy Assistant) has been removed from the repository because it crashes on Linux/Ubuntu.

This has held me back from upgrading to the newer version of Ubuntu. As they say, necessity is the mother of invention, except that in this case the invention is a workaround for this annoying problem. The good news is that the workaround is simpler than I thought.

It appears the crash when running the Natty version of GPA is caused by some kind of security issue or restriction. Here is the workaround:
  1. Download and install a copy of GPA (0.9.0-1).
  2. After successful installation, you can run GPA by launching it from the Terminal as follows:
    sudo gpa
This should allow you to operate GPA normally. Just make sure you do not launch it from the Ubuntu launcher.

Friday, April 27, 2012

In search of free UML tools

Ever since BoUML went commercial and I stupidly installed the upgrade, I have lost the use of a very capable UML tool. Hence I have embarked on a search for free UML tools that primarily run on Ubuntu. I am not particularly concerned if they do not work on Windows at all.

As part of my journey, I revisited old friends such as ArgoUML, Dia UML, and UMLet. With the help of this Wikipedia page on UML, supplemented by my exploration of Ubuntu's Software Center, I have managed to find a few new ones.

Previously unknown to me were Umbrello UML, Software Ideas Modeler, and Modelio, which has an open-source (free) version and a commercial edition.

Many of them are rather skimpy in their UML treatment beyond drawing the UML artefacts. A proper UML tool is more than just a pretty drawing tool. Dia and UMLet belong to the class that treats each drawing as a separate, unlinked diagram. In these tools, changing, for example, a class name in the class diagram does not automatically propagate to, say, the sequence diagram.

As a result, they are extremely lightweight UML tools; a shade better than a drawing tool.

Three areas that I am particularly interested in are:
  1. Class derivation, with the derived class able, in the UML tool, to select which base class operations it overrides.
  2. Similarly, when a class realizes an interface, the tool should offer an automatic means to implement all the interface's methods without one manually duplicating them.
  3. A component being able to indicate its provided and/or required interfaces.
While these are not extremely demanding requirements (they are just basic OO stuff), surprisingly only Modelio, out of the above list of free UML tools, meets them. My investigation revealed, to my surprise, that all the other tools fail to incorporate the most basic OO requirements: derivation and interface implementation. The developers of these tools are perhaps too preoccupied with the graphics to focus on the OO modelling requirements.
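To make requirements 1 and 2 concrete, here is a sketch in Python of the kind of stubs a capable UML tool should be able to generate from the model; the Shape, Printable and Circle names are invented purely for illustration:

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """Base class with a virtual (abstract) operation."""
    @abstractmethod
    def area(self) -> float: ...

    def describe(self) -> str:          # concrete, but overridable
        return f"area = {self.area():.2f}"

class Printable(ABC):
    """An 'interface': nothing but abstract operations."""
    @abstractmethod
    def print_to(self, device: str) -> str: ...

class Circle(Shape, Printable):
    """Realizes Printable and overrides Shape's operation -- exactly the
    stubs the modelling tool should offer to generate."""
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:            # overrides Shape.area
        return 3.14159 * self.radius ** 2

    def print_to(self, device: str) -> str:   # implements Printable.print_to
        return f"{device}: {self.describe()}"

c = Circle(1.0)
print(c.print_to("console"))   # console: area = 3.14
```

A tool that understands the model already knows which operations of Shape are overridable and which operations Printable demands, so producing these skeletons mechanically should be routine.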

If you know how to meet the above three requirements in the above-mentioned tools, or you have found more competent free UML tools, feel free to submit your comments and recommendations.

Sunday, March 18, 2012

What is more difficult - to think or to do something - follow-on

In my previous post, I pondered this topic and wondered whether the MIT laboratory experiments on habit could explain it.

Since then I have rediscovered a study [Hadar] published in May 2008 that attempts to answer a similar question, directed more specifically at OO design.

Apparently
the formal OO paradigm has come to sometimes clash with the very intuitions that produced it. Thus while objects, classes, and inheritance certainly have an intuitive flavour, their formal version in OOD is different in important ways from their intuitive origins.

Dual-process theory, imported from contemporary cognitive psychology, highlights the underlying mechanism of those situations where our intuitions clash with our more disciplined knowledge and reasoning..... Highly accessible features will influence decisions, while features of low accessibility will be largely ignored.

The intuition, as discussed in this paper, is explained based on the Dual-Process theory, credited to D. Kahneman, which states that
our cognition and behavior operate in parallel in two quite different modes, called System 1 (S1) and System 2 (S2), roughly corresponding to our common sense notion of intuitive and analytical thinking.

... S1 processes are characterized as being fast, automatic, effortless, unconscious, and inflexible (difficult to change or overcome). In contrast, S2 processes are slow, conscious, effortful, and relatively flexible. In addition, S2 serves as monitor and critic of the fast automatic responses of S1, with the "authority" to override them when necessary. In many situations, S1 and S2 work in concert, but there are situations ... in which S1 produces quick automatic non-normative responses, while S2 may or may not intervene in its role as monitor and critic.
The paper is worth reading as it describes experiments with experienced developers rather than students.

I wonder if this explains why it is much easier to get "experienced" people to do work (the S1 process at play, assisted and influenced by habits) than to get someone to think (the S2 process, which needs to counter where habits may lead)?

I am also wondering whether S1 relates to the Accidental Tasks and S2 to the Essential Tasks of software development, as elaborated long ago in the "No Silver Bullet" paper by Brooks [Brooks]?



[Hadar] "How intuitive is Object-Oriented Design?" by Irit Hadar and Uri Leron, CACM May 2008/Vol 51 No. 5, pp 41-46

[Brooks] "No Silver Bullet - Essence and Accidents of Software Engineering", Frederick P. Brooks, Jr. IEEE Computer April, 1987. Vol 20 No. 4, pp 10-19

Wednesday, February 29, 2012

What is more difficult - to think or to do something?

I have always pondered which is more difficult: to think about solving a problem or dealing with a situation, or simply to do something to attack the problem or situation.

Over several years of working with developers of different levels, I have come away with the view that it is more difficult to get someone to think (as in analysis or design). It seems most developers are only too happy to do whatever comes naturally, which may or may not produce the best solution.

Why is it harder to think? I wonder.

Recently, I came across an article reporting experiments studying what habit is and how it influences one's actions. Perhaps this is the answer: thinking requires more mental activity and needs to overcome habits, which may not be directly applicable to the task at hand (close, but not exact). Allowing the brain to select the right chunk reduces brain activity and puts one into an almost automatic mode.

But as with many things in life, it is much easier to form bad habits than good ones, and this increases the chance of applying bad habits rather than good, resulting in inferior performance.

Friday, February 17, 2012

LibreOffice Base 3.5.0 fixes MS Access Connectivity problem

It is nice to see that LibreOffice Base version 3.5.0 (Windows edition) has fixed the MS Access connectivity problem reported previously.

Monday, February 13, 2012

Faulty LibreOffice Base MS Access Connectivity

Further to my last message reporting the bug in connecting LibreOffice Base (ver 3.4.5) on Windows to an MS Access database, below are screenshots showing the bug.

To demonstrate this bug, I created a very simple MS Access 2003 database with a table called ListOfNumbers containing 20 records (numbers from 100 to 119), as shown here:


Having got the MS Access database, I then created a LibreOffice Base database using MS Access connectivity, as shown here:

 
Opening ListOfNumbers in LibreOffice shows the first record missing: only 101 to 119 are displayed, and only 19 records are reported, one short. Notice which record is missing.

The record is actually retrieved in the record set, only not displayed. If you sort the record set in descending order, the last record (value 119), now in first position, goes missing, while the first record (value 100) is now displayed as the last, as shown here:

That is, the missing record is always the first record. This leads me to suspect that the LibreOffice code fails to reconcile the difference in base between two array conventions: the OLE Automation collection starts at 1, while the other, more conventional one starts at zero.
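A small Python sketch (a guess at the failure mode, not LibreOffice's actual code) shows how mixing a 1-based convention with a 0-based one reproduces the symptom exactly:

```python
records = list(range(100, 120))   # 20 records: 100 .. 119

def display_buggy(rows):
    # Client code wrongly treats a 0-based Python list as 1-based:
    # it iterates indices 1 .. len-1, so rows[0] never appears.
    return [rows[i] for i in range(1, len(rows))]

shown = display_buggy(records)
print(len(shown))   # 19 -- one record short
print(shown[0])     # 101 -- the first record, 100, is missing
```

Sorting the data before display changes which value sits at position 0, but the record in that position is always the one dropped, matching the behaviour seen in the grid.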

Just to prove that MS Access itself is not at fault, I created another LibreOffice Base database, this time using ODBC connectivity with the MS Access ODBC driver, and the result shown below reports no missing record.

 
Hence the only conclusion one can draw is that LibreOffice Base's MS Access connectivity support is faulty.

Thursday, January 19, 2012

Caveat in using LibreOffice Base to access MS Access Database

Microsoft Office's (Office XP) hostility to being deployed in a virtual machine has once again driven me to desert MS Office for LibreOffice. It is not that it cannot be deployed in a VM, but when one relocates the VM from one machine to another, MS Office's activation scheme rears its ugly head. Rather than re-activating, which only solves the problem temporarily, I have decided to ditch MS Office.

In so doing, I have discovered a rather disturbing bug when accessing an MS Access (ver 2002) database from LibreOffice Base (ver 3.4.5).

In LibreOffice Base (Windows version 3.4.5), you can connect to MS Access directly using LibreOffice's own driver, or via the MS Access ODBC driver.

The former connection technique seems to have a bug in handling the MS Access database, so the latter, ODBC technique should be preferred, at least for this version. The direct connection can open the database and access the tables and queries (all I have tested so far), producing seemingly correct results, but closer examination reveals something disturbing.

The direct connection technique always reports one record fewer than the actual total number of records in MS Access: the LibreOffice Access connector never shows the first record of the result set, even though that record is present in the record set returned by the query. For example, suppose the first column is a sequence number increasing from 1 upwards.

If you sort that column in ascending order, the record with sequence id 1 is not shown, and the total number of records reported at the bottom of the view is one less than it should be. The record with the highest sequence number appears as the last record.

If you sort that column in descending order, the record with the highest sequence number goes missing, since it is now the first record in the grid, but the record with sequence number 1 now appears as the last record. The total record count is the same: one less than the actual number.

It does not matter whether you are querying a table directly or executing an Access query; the grid always drops the first record of the result set. It seems the connector has an out-by-one bug, or perhaps there are two index bases internally (one zero-based and the other one-based) and the program or driver fails to map between them correctly.

Now, if you reconnect from Base to the very same database using ODBC connectivity, by defining an ODBC data source with the MS Access ODBC driver, the above-mentioned idiosyncrasy does not appear.
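The same ODBC route can also be exercised outside Base. Here is a hedged sketch assuming the third-party pyodbc package and the Windows MS Access ODBC driver are installed; the file path is hypothetical, and the table name is from the example above:

```python
def access_odbc_conn_str(mdb_path: str) -> str:
    """Build an ODBC connection string for the Microsoft Access driver."""
    return ("DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
            f"DBQ={mdb_path};")

conn_str = access_odbc_conn_str(r"C:\data\numbers.mdb")
print(conn_str)

# With pyodbc available (Windows, driver installed), the full record set
# comes back intact -- no dropped first record:
# import pyodbc
# with pyodbc.connect(conn_str) as conn:
#     rows = conn.cursor().execute(
#         "SELECT * FROM ListOfNumbers ORDER BY 1").fetchall()
#     print(len(rows))   # all 20 records
```

Because the ODBC driver is Microsoft's own, it sidesteps whatever index-base confusion exists in Base's direct connector.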

In conclusion: do not use LibreOffice Base's direct MS Access connection driver; use the ODBC connection technique to avoid the missing record.
