Monday, December 24, 2007

Deploying mobicents services on startup


I spent about 20 minutes searching the Mobicents documentation to find out how to make Mobicents deployments permanent. Unfortunately I could not find it, even though I would consider this important documentation for any such server. It might be my mistake of looking in the wrong documents, or, I am afraid, it might be missing from their documentation.

Fortunately I didn't spend too much time finding the way myself. It is of course an easy and straightforward task, but I wonder how this useful procedure went unnoticed and improperly documented. My apologies to the Mobicents community if I really did overlook the correct documentation.

You just have to use a template BSH script to make it work. That's it.

For example, I needed my http-servlet-ra to be deployed on server startup and made permanent.

Here are the steps I followed.

  1. Copy the /resources/http-servlet-ra/*.jars into server/server/all/deploy-mobicents/
  2. Copy the /resources/http-servlet-ra/deploy-http-servlet-ra.bsh into server/server/all/deploy-mobicents/scripts/

That's it. Your resource adapter will be installed on server startup. You don't have to run ant ra-deploy from the ra folder anymore!

Now how about installing and activating our own SBB jars on startup? Simple. I have an HTTP connector SBB to be deployed and activated on startup. These are the steps I followed:

  1. Copy the httpconnectorsbb-DU.jar into server/server/all/deploy-mobicents/

  2. Write an httpconnectorsbb-DU.bsh file and copy it into server/server/all/deploy-mobicents/scripts/

Use the following sample bsh file and modify it for your service. Make sure the script uses the same name as your DU jar file.

You may just have to modify these two variables: add your SBB name, SBB vendor and the version mentioned in your SBB jar's XML.

String thirdPCCTriggerURL = "${jboss.server.home.url}deploy-mobicents/ThirdPCCTrigger-DU.jar";
ServiceIDImpl serviceId = new ServiceIDImpl("se.jayway.sip.slee.service.ThirdPCCTrigger-service", "Jayway", "0.1");

Now start your Mobicents server and you will see your RAs and SBBs installed and activated there.

Coool !!!

Wednesday, November 28, 2007

Observer pattern using spring AOP

I published an article on how to implement the observer pattern using spring AOP.

Visit here to have a look !

Friday, November 16, 2007

Pattern Oriented Frameworks - Need for the hour

I got to know about a project on java.net called the J2EE Pattern Oriented Framework, which has built-in patterns like DAOs, Service Locators, Adaptors, MVC, etc. This is something interesting, and something I have been dreaming of for a long, long time.

What I really want is a slightly different approach. The architect should do more than the design: every chief architect of any company should bring a pattern-oriented framework to their company. I doubt we can use the above-mentioned java.net project or x-rad directly, because using a common framework for this kind of thing often brings confusion through the garbage stuffed in there. But you can use it as a reference and build your own model for your company. That will really help the RAD approach. How many times have we seen dev teams stuck getting the basic things right? Plenty. The main reason is that every engineer is reinventing the wheel all the time, which hurts team efficiency heavily.

Think about Ruby. It reached its hype (yes, hype!) after the arrival of Rails. See Grails for how easy it is to start a project there: you will spend no time getting the basic setup and your designs done.
So you need a Grails-like model for J2EE. You have a build.xml or Maven to create the project. Create whatever model you want. I want an MVC setup with Spring MVC, DAOs with Hibernate and a MySQL back end... woof... the build will do the rest. It will create the model with the basic stuff, so you only need to think about your business domain, not the technical domain, because you will already have the reference implementations in your company's flavor.

Yes, it is pretty hard for a service provider: every time, you need to change your technologies, designs, models, etc. But that's what the industry is all about. Like the Maven ibiblio, have a framework ibiblio repository, and every time you have a model in place, just do the prototype work and commit it to your framework ibiblio repository.

So the need of the hour is an architect (framework architect or infrastructure architect) who designs the prototypes for every pattern a company uses and maintains an ibiblio-like repository in the company.

RAD will be RRRRAD then I am sure !

Wednesday, November 14, 2007

Mixed reviews on Google's Android


Recently Ruchith posted about the Google phone and its new SDK. Eventually the release happened, and here is a little post on it.
After the initial delight at the announcement of the Android release, people seem to be giving mixed reviews of its Dalvik virtual machine. They are impressed with the Linux 2.6 kernel, the libraries and the application stack, but eyebrows are raised when it comes to the runtime engine, Dalvik.

When I saw Dalvik's limited support for standard Java (OK, Sun Java!), GWT suddenly came to my mind: GWT supports core Java such as java.lang, yet it is a different JRE and compiler. But that is understandable for GWT, as at the end of the day you create JavaScript out of it.

But here Dalvik attempts to become a standard for mobile devices. Of course, we often have enough troubles and limitations with J2ME, so Google would want to have its own standard. But it will take a good while for it to be available and compatible in market phones.

But the iPhone is yet to arrive in the Asian market and Google is planning to release its own model, so it won't be too long before Android dominates the arena.

And given Google's lack of intention to go along with OpenJDK on this, I think Google is slowly moving to its own Google JDK. So be ready !!!

Good articles can be found on InfoQ and oreillynet.

Tuesday, November 6, 2007

Google enters into mobile




Google takes a big step forward by moving into mobile and introducing the Open Handset Alliance (OHA), which includes 34 powerful companies such as Qualcomm, Motorola, Samsung, T-Mobile, Sprint, Skype, LG, HTC, KDDI, DoCoMo and China Mobile. The goal of this alliance is to develop an open source operating system for mobile phones, in other words "an alternative to Symbian OS, Windows Mobile and iPhone". (I'm sure Apple, and Symbian too, will think about opening their OSes.)

Google's mobile OS will be called "Android" (no "Google" word/logo attached at all .... how humble they are ....), an open source OS built on Linux and Java.
Since this mobile OS is available free of charge to OEM vendors, it will be a major hit against proprietary mobile OS vendors.
Further, it will give a more flexible environment to mobile application developers.
And I'm sure ......... this will lead to more mobile OS distros, like on the Linux desktop.

Surprise ........... the Android SDK will be available from Nov 12th, and mobile phones will be available from mid-2008. [Not sure how fast it is going to come here, since it will be launched initially in the USA, Europe, Japan and China.]

I think this is great news for mobile application development companies like us; it will eliminate some of the barriers we currently have because of the closedness of mobile operating systems.

Finally ......... mobile application developers are getting more flexibility and a more open environment.

Sources :
image from http://www.dailytechrag.com
info from here

Monday, November 5, 2007

commons logging and log4j

A few weeks ago there was a mailing-list discussion about getting rid of commons-logging from the WS projects, and the answers from people were all +1. I wanted to have a look but couldn't find the time for it.

Yesterday my friend Thayapavan asked a related question: what's the difference between log4j and commons-logging? Interestingly, it had been asked by an interviewer, which makes it clear that people are starting to think about why we use these two jars, rather than using them just because they are there!

The first question is why we need commons-logging at all. The idea behind commons-logging must have come from the notion that the logging implementation is an integration concern, not a development concern. Developers use commons-logging in their code without worrying about what the implementation is; it is the integration people who decide whether to use log4j, the JDK logger or any other proprietary one.

So I think it is a development choice. You would not like to tie log4j into your source in case you want to change it in the future (to the JDK one, for example). But I think 8 out of 10 times people wouldn't do that. And people who have tasted writing Appenders and the other fancy features log4j has (especially after log4j 1.2) wouldn't even think of it! So if you are comfortable using log4j directly, go for it instead of wrapping it with commons-logging. Or, if you want to be smart and let the integrators decide, go for commons-logging!
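The facade idea can be sketched in a few lines of plain Java. This is a toy illustration of the pattern only, not the actual commons-logging source; the `Log` and `LogFactory` names mirror its API, but the bodies here are invented and hard-wired to the JDK logger:

```java
// Illustrative sketch of the logging-facade idea: application code logs
// against an interface, and a factory picks the concrete backend.
interface Log {
    void info(String msg);
}

// One possible backend: java.util.logging from the JDK.
class JdkLog implements Log {
    private final java.util.logging.Logger logger;

    JdkLog(String name) {
        this.logger = java.util.logging.Logger.getLogger(name);
    }

    public void info(String msg) {
        logger.info(msg);
    }
}

class LogFactory {
    // In the real facade the backend is chosen at integration time
    // (classpath discovery, system property); hard-wired here for brevity.
    static Log getLog(String name) {
        return new JdkLog(name);
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        // Application code only ever sees the Log interface.
        Log log = LogFactory.getLog("demo");
        log.info("the backend can be swapped without touching this code");
    }
}
```

Swapping in a log4j-backed `Log` implementation would then be a one-line change in the factory, with no edits to any calling code, which is exactly the integration-time flexibility described above.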

Sunday, September 23, 2007

Google's code review

I just happened to find out how keen Google is on code review. It is a very long, well-defined process, and it tells you the value they give to their code reviews. "Review first before commit" is something similar to the "test-first code" model. The bottom line is, you have to pay a lot of attention to code review, though you may not be using Perforce or any special tool.

Friday, September 21, 2007

Robert Hanson on my million dollar Q

Robert Hanson writes about the million dollar question I have been asking myself for a long time: what is the standard practice for learning and adopting new technologies? It is good to learn new technologies, but there are toooooo many new THINGS! Like he mentions with MVC: how many MVC frameworks have I gone through in the last year or so? From simple JSP to Struts, JSF, Spring MVC, and take GWT if you wish. Sure, I learned these THINGS and that helped me improve my skills, and every model has its own pros and cons. But will this be effective? Can we make full use of these technologies if we continue like this? It is true that these things are evolving and there is always a superior THING around the corner. But my question is (or rather, lots of people's question is): when should we do this?

I think it is a question of how well the existing technology meets our expectations and how much the new technology would improve our solutions. Our temptation to learn a new technology (or our thrill in using new technologies) often wins, and we skip answering those questions and just do it.

Any comments ?

Wednesday, September 19, 2007

Eric Evans' interview on InfoQ

Eric Evans talks about domain-driven design in this interview. He covers the basics of DDD and, in particular, how you should focus on a domain-targeted design and not on any underlying technologies. I think it is a common mistake that when we do domain-driven design we are tempted to keep a particular technology in the back of our minds, which will always disturb our focus.

He talks about the ubiquitous language used in such designs. The concern I have is: who is the right person to do such domain-driven design? Is it the architect, the project leader, the tech lead or the pre-sales person? And at which point does this phase fall in the process cycle? We often have a communication gap around the domain-driven design when it comes to the development life cycle, so we need to make sure the model and the ubiquitous language are properly conveyed throughout the software life cycle.

Another important aspect I found is about which areas of the domain you have to model. Evans says you should not model everything in the domain. It is very important to isolate the models and only focus on the complicated areas. Obviously, overuse of DDD will hurt both the design and the business value.

Want to find out more? Visit this interesting site on DDD.

Thursday, September 13, 2007

Spring web services

Arjen Poutsma, project lead of the Spring Web Services framework, talks about the contract-first approach in this interview, which we discussed in my last post. Spring Web Services was released very recently and looks impressive, though I think it doesn't offer any client-side support yet. We also need to wait and see Apache CXF's response to this; Apache CXF is considered a good model to use with Spring.

Monday, September 10, 2007

Interview on SOA and Web services

The WS-* vs REST arguments are getting very popular these days. Knowing WS-* to some extent and having read through the REST concept a bit, I don't understand the real fuss behind these arguments at all. Since all the big industry giants are competing with each other on this, I don't want to dig my little head into it. But I just wanted to clarify something to myself, so this post is simply to make myself clear on these technologies. I think by asking my own questions of myself, I will be able to find some answers, or at least my friends will jump in and explain. Let me try to make this post more interesting: I would like to invite Lasantha (the busy journalist in Sri Lanka) to interview me on this !!!


Lasantha : Hi JK, would you spend some time on an interview with me?


JK : Me? With you? Oh no, you are a very dangerous guy!


Lasantha : That's all in the past, JK. Now I have moved on to interviewing about these much safer WS vs REST arguments!


JK : Oh god, you will find it more difficult than your political interviews, man. Anyway, go ahead.


Lasantha : People are arguing about WS-* vs REST, SOA vs ROA, etc. What's your opinion on this?


JK : First of all Lasantha, I belong to neither the SOA republican party nor the ROA conservative party. I am a non-aligned guy, or rather I don't know either of these technologies very well.


Lasantha : That's ok. What do you think about SOA and ROA?


JK : Well, SOA is an attempt to approach a design in a service-oriented way. You define the services independently first and then provide a way for the designer to let these services talk to each other. Basically it is an evolution of traditional modular programming. The difference is that in traditional distributed programming you embed the service call inside the code, whereas in SOA you treat them as services and provide metadata about a service to another service instead of calling it from source code.


Lasantha : What's the point of doing that?


JK : There is a point. First of all, you get a business-driven design (is that a new term or an obvious one?). Designers will start looking at the system in a service-oriented way, so reusability becomes easy and sensible, I suppose. The atomic level of modularity becomes larger, and a service itself becomes an application, you can see.


Lasantha : Yes I see a point.


JK : There is another important point as well. Remember I told you that a service offers metadata about itself, which another service can use to learn how to talk with that service. Now just think: if that metadata becomes a standard, then everybody will start using it, and the so-called interoperability will become easy.


Lasantha : Isn't that what CORBA or DCOM were trying to achieve?


JK : You have started asking your political questions now. Well, I didn't say that CORBA is different from SOA. You see, SOA is a concept; CORBA, DCOM and web services are technologies which can be used to implement an SOA application.


Lasantha : So SOA is not web services then?


JK : Correct, there is no way both can be the same. In your terms, SOA is like communism and web services are like Marxism. But you know there are also Leninism and Maoism, all of which are trying to achieve communism in different ways.


Lasantha : Oh, ok. So SOA is more of a methodology while web services are a kind of representation, or rather implementation, right?


JK : Yes great.


Lasantha : Then why do people confuse web services and SOA as being the same?


JK : It is like how the REST people forgot about the origin of REST. REST was there even before, but only now has it become a buzzword. Likewise, the SOA concept was there even before CORBA and DCOM came along, but only now is it shaping up well with web services. And full credit to the companies who successfully fooled people into believing SOA and web services are the same!


Lasantha : That means web services are not the sole representation of SOA?


JK : They cannot be the sole one ... but they are the best option available for now. Something like 'Dravid is the best currently available option for the Indian captaincy, but of course not the best ever'.


Lasantha : Why do you think so ?


JK : Because Ganguly has just come back in! Just kidding. Ok, back to the point. This is one thing I am also looking into. I think web services adopt SOA really well. Probably when web services were introduced there was already a good understanding of SOA, so the web services concept, or rather the specs, must have been developed with SOA in mind.


Lasantha : Can you describe that more, please ...


JK : The key to success with SOA is how you define the contract. That is very important, since here machines talk to each other without any manual intervention. The contract specified in web services is clearer and more understandable, being in XML. Moreover, CORBA is an object-oriented model: the client-server coupling is very tight, as opposed to web services' loose coupling. The wire protocol is SOAP, which is much easier to work with than CORBA's binary format. And I see CORBA as an underlying technology, as it doesn't address the SOA approach in a broad way, whereas web services give the big picture, and it is much easier to design an SOA-flavored application using web services than CORBA. I think these kinds of arguments have not been raised much since WS-I dropped the RPC concept in web services.


Lasantha : Why is that ... RPC is a good model, though.


JK : That's the thing. You need to come out of the object-oriented model which CORBA posits. SOA is more to do with services, not operations. Traditional web services used to create the web service methods and functions at the source code level and then transform them into WSDL. For example, you write a method called foo in a class and create the WSDL from that Java class.


Lasantha : What's wrong with it?


JK : Everything is wrong with it. You have to focus on the contract first; otherwise you will change your contract whenever you change the server classes. The RPC feature in WS had this impact: people often looked at RPC web services as a replacement for CORBA and forgot to think about the "service" orientation there. Otherwise you fall into reverse engineering, I suppose.


Lasantha : So you think contract first is very important?


JK : I think that's the entry point for SOA. It's like having to design first without worrying about the underlying implementation. And when it comes to interoperability, it is even more important: you have to define the contract of your service properly, starting from the WSDL. This is all about messaging and services, not operations.
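Concretely, contract first means the WSDL's abstract part (messages and port types) is written before any server class exists. A minimal, hypothetical sketch (the `OrderService` names are made up for illustration, not from any real service):

```xml
<wsdl:definitions name="OrderService"
    targetNamespace="http://example.com/orders"
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    xmlns:tns="http://example.com/orders">

  <!-- Messages are defined first, in terms of schema elements -->
  <wsdl:message name="PlaceOrderRequest">
    <wsdl:part name="body" element="tns:placeOrder"/>
  </wsdl:message>
  <wsdl:message name="PlaceOrderResponse">
    <wsdl:part name="body" element="tns:placeOrderResponse"/>
  </wsdl:message>

  <!-- The portType is the service contract; implementations come later -->
  <wsdl:portType name="OrderPortType">
    <wsdl:operation name="placeOrder">
      <wsdl:input message="tns:PlaceOrderRequest"/>
      <wsdl:output message="tns:PlaceOrderResponse"/>
    </wsdl:operation>
  </wsdl:portType>
</wsdl:definitions>
```

The server classes are then written, or generated, to satisfy this contract, so changing an implementation class cannot silently change the contract.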



JK : Besides that, web services adopt the WWW very well, as the name suggests. I think that's where they stand tall. Their focus on features like security and trust goes beyond what HTTP provides, and they try to standardize these features very well. For example, you can sign a SOAP message and send it in a much more secure way than standard HTTP promises. Most of the time HTTP can be safe enough, but there are situations when you need more than what HTTP addresses.


Lasantha : I think we need to discuss this in detail and should bring ROA, or rather REST, into this conversation. We will continue soon.


JK : Yes, true. It would be better if we continue this with REST later. See u then, bye.

Friday, September 7, 2007

My blog on Spring

I set myself a challenge to explain the Spring J2EE framework to my 5-year-old nephew and ended up messing everything up. But I am still blogging this to open up a discussion on frameworks and Spring.
Some of the real-life metaphors can be confusing or argued the other way, but I just tried to use them.

Your comments are welcome.

Click here to read and discuss it.

Tuesday, August 28, 2007

TIOBE Programming Community Index for August 2007...

Seems Java is still leading.


Source : http://www.tiobe.com

Thursday, August 9, 2007

Web apps scalability

Along with Web 2.0, web traffic is increasing day by day. Individual users as well as enterprises are getting onto the web, eyeing SaaS-type applications and moving away from desktop apps.
The amount of traffic generated by these apps is tremendous. Just to name a few:
  • Over 100 million video downloads per day from YouTube.
  • More than 4 billion queries per day on Flickr.
  • Do I need to mention Google, .........
How these apps handle the traffic, what the architecture behind them is, and which technologies empower them are some of the questions that come to mind.

I came across a good web site with some very good information about these apps, answering the above questions. It talks about how scalable they are, how that has been achieved and which technologies are being used.

Google Architecture info - here
YouTube Architecture - here
FlickR Architecture - here

Saturday, July 21, 2007

Using SVK for offline access to subversion

SVK is a distributed version control system. Since I've been working through a dial up connection to the internet, I was looking for a way to have offline access to source control. That way I will be able to view logs, diffs and even commit changes while offline. I've only used SVK for a small time, but it looks ideal for this task.

SVK commands mirror subversion commands so it's very easy to use if you are familiar with subversion. It has better support for branching and merging and doesn't keep any extra files inside your working copy (like CVS or .svn directories). SVK is built on top of the subversion and is written in Perl.

I particularly like being able to filter logs and edit the files being checked in while editing the log message: svk log --filter 'HEAD 15 | grep employer'

The easiest way to install SVK is to use your distribution's package manager. In Fedora, I could just use yum install perl-SVK (you need to have the Fedora extra repositories configured). This downloaded about 3MB of rpms so was quite ok on a dialup connection. For alternative methods look in the SVK book or Installing SVK

Once you have SVK installed, initialize your local repository (depot) with

svk depotmap --init

Initialise and sync a mirror for the remote repository with:

svk mirror https://orangehrm.svn.sourceforge.net/svnroot/orangehrm/trunk //orangehrm/trunk
svk sync //orangehrm/trunk

The sync command can take a while to complete, but you can interrupt in the middle and the next time you run it, it will start from where you stopped earlier.

Here //orangehrm/trunk is the mirror of the remote repository. While we can checkout //orangehrm/trunk and work on it, any commit will propagate to the remote server. That will not do if we are offline.

So we create a local branch.

svk copy //orangehrm/trunk //local/orangehrm

Now we can checkout the branch.

svk co //local/orangehrm

This will create an orangehrm directory and you can do all your work here. Check-ins go to the local branch so you don't need network access.

When you are online again, sync the mirror again, merge the changes to the local branch and update your working copy.

svk sync //orangehrm/trunk
svk smerge -Il //orangehrm/trunk //local/orangehrm
svk update (from your working copy)

You can also use svk pull instead of the last 3 commands. I prefer doing it this way because I can use the -Il options, which apply each change from the remote server individually and use the original log messages as commit messages.

Now if you have any changes in your working copy, check them in to the local branch.

Push the local changes to the remote server (first doing a dry run to check for conflicts):

svk smerge -C //local/orangehrm //orangehrm/trunk
svk smerge -Il //local/orangehrm //orangehrm/trunk

Again I prefer using -Il to get one commit to the remote server per one local commit but you can also have one single commit containing all the local changes. Using a single commit is faster and you may prefer it if using a slow connection to the internet. You might prefer the svk push command, which does the above two steps in one go.

SVK also supports mirroring CVS, Perforce and some other repositories.
I recommend you go through these tutorials and glance through the SVK book before using it.

Saturday, July 14, 2007

Reloading the spring context dynamically

Those who have used the Spring framework in a standalone application might have encountered difficulty in reloading the application context. It is easy for the web application context but not for a standalone one.
What are the limitations of a standalone Spring server when reloading the context?
1) You do not have a built-in API for doing it.
2) Since it is standalone, you need an RMI-like stub to talk to the standalone application context.

So what solutions do we have for dynamically reloading the context?
1) You can reload the context periodically (using a timer or the Quartz scheduler, whatever), but this is not good since most of the time you only need to reload on demand.
2) You can implement an RMI-based client that tells the server to reload its context.

Since item 1 is more straightforward, we will discuss solution 2 today.

The easiest way to reload the context remotely on demand is JMX. The flexibility and simplicity of using JMX in Spring makes this very simple.
The idea is this: JDK 1.5 ships with the platform MBean server, so you can simply export a bean as an MBean. It is then just a matter of having a MonitorMBean that reloads the context and calling that bean to reload the server context.

This is my Monitor MBean interface

public interface MonitorMBean extends Serializable {
    String reload();
}

This is the implementation for the interface

public class MonitorMBeanImpl implements MonitorMBean {

    /**
     * The MBean implementation of the reload method
     */
    public String reload() {
        // StandaloneServer is the class which holds the spring application context
        StandaloneServer.reload();
        return "Successfully reloaded the context";
    }
}

Here comes my context.xml for the server (I'll explain it bean by bean; the complete source code is attached anyway).

First we have the MBean server:

<!-- Starting mbean server -->
<bean id="mbeanServer" class="java.lang.management.ManagementFactory" factory-method="getPlatformMBeanServer"/>

We have a POJO based bean called monitorBeanLocal.

<!-- monitor jmx mbean for the standalone server -->
<bean id="monitorBeanLocal" class="hsenidmobile.control.impl.MonitorMBeanImpl" depends-on="mbeanServer"/>

Now we expose our POJO as an MBean:

<!-- Expose our monitor bean as a jmx managed bean -->
<bean id="monitorMBean" class="org.springframework.jmx.export.MBeanExporter">
    <property name="beans">
        <map>
            <entry key="bean:name=monitorBean" value-ref="monitorBeanLocal"/>
        </map>
    </property>
    <property name="server" ref="mbeanServer"/>
</bean>

Of course we need an RMI registry:

<bean id="registry" class="org.springframework.remoting.rmi.RmiRegistryFactoryBean">
    <property name="port" value="1098"/>
</bean>

Now let's have the RMI server connector:

<bean id="serverConnector" class="org.springframework.jmx.support.ConnectorServerFactoryBean" depends-on="registry">
    <property name="objectName" value="connector:name=rmi"/>
    <property name="serviceUrl"
              value="service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:1098/server"/>
    <property name="environment">
        <props>
            <prop key="jmx.remote.jndi.rebind">true</prop>
        </props>
    </property>
</bean>

That's all for the JMX part. But for testing purposes I have a bean called WhoAmI:

<!-- Sample bean to see how this is reloaded -->
<bean id="whoAmI" class="hsenidmobile.control.domain.WhoAmI">
<property name="myName" value="JK"/>
</bean>


This bean is just a simple Java bean with an additional print method:
public class WhoAmI {
    private String myName;

    public void setMyName(String myName) {
        this.myName = myName;
    }

    public void printMyName() {
        System.out.println("My Name is now " + myName);
    }
}
Cool, now let's go through our main server class.

public class StandaloneServer {
    private static AbstractApplicationContext context;

    public static void main(String[] args) {
        if (args.length < 1) {
            System.out.println("Usage: StandaloneServer <context-xml>");
            return;
        }
        String contextXml = args[0];
        context = new FileSystemXmlApplicationContext(new String[]{contextXml}, true);
        context.registerShutdownHook(); // Useful in case you want to control graceful shutdown.

        printMyName();
    }

    /**
     * Method for reloading the context
     */
    public static void reload() {
        if (context == null) {
            throw new RuntimeException("Context is not available");
        }
        System.out.println("Reloading the context");
        context.refresh();
    }

    /**
     * Test method for context reloading
     */
    private static void printMyName() {
        new Thread() {
            public void run() {
                while (true) {
                    ((WhoAmI) context.getBean("whoAmI")).printMyName();
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        // do nothing
                    }
                }
            }
        }.start();
    }
}
So we simply start the Spring application there. You can also see the simple reload method which is called by our monitor bean. The only difference you may have noticed is that I use AbstractApplicationContext instead of ApplicationContext, since it has the additional methods we need.

Right, we are done. Oh yes, we need to test this. So how should we do it? Here is a simple JMX client class to test it:

public class AdminClient {

    public static void main(String[] args) {
        String mbeanName = "bean:name=monitorBean";
        String serviceUrl = "service:jmx:rmi://localhost/jndi/rmi://localhost:1098/server";
        try {
            MonitorMBean monitorMBean = createMbeanStub(mbeanName, serviceUrl);
            monitorMBean.reload();
            System.out.println("The application context is reloaded successfully.");
        } catch (IOException e) {
            System.out.println("IO error occurred while reloading " + e); // Should use a logger instead
        } catch (MalformedObjectNameException e) {
            System.out.println("Malformed object name " + e); // Should use a logger instead
        }
    }

    private static MonitorMBean createMbeanStub(String mbeanName, String serviceUrl)
            throws MalformedObjectNameException, IOException {
        ObjectName mbeanObjectName = new ObjectName(mbeanName);
        MBeanServerConnection serverConnection = connect(serviceUrl);
        return (MonitorMBean) MBeanServerInvocationHandler.newProxyInstance(serverConnection, mbeanObjectName,
                MonitorMBean.class, false);
    }

    private static MBeanServerConnection connect(String serviceUrl) throws IOException {
        JMXServiceURL url = new JMXServiceURL(serviceUrl);
        JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
        return jmxc.getMBeanServerConnection();
    }
}

So what we do here is simply invoke the monitor MBean's reload method to refresh the context.
So first, run the standalone server:
java hsenidmobile.control.StandaloneServer
You can see the output
My Name is now JK
My Name is now JK
My Name is now JK


Now go and change the context XML: edit the whoAmI bean's myName property from JK to CK. Then run our JMX client:

java hsenidmobile.control.AdminClient
Now you can see messages in the server console about the reloading of the context, and also note that the output changes to this:

My Name is now CK
My Name is now CK
My Name is now CK
Cooool. It's as simple as that.

So what we have done so far?

1) We are able to reload the spring standalone context remotely on demand. This enable us to change the server properties without restarting the server.

What we can do more?
1) If we have the properties in a database, or if you are willing to persist the properties in a file on the fly, then you can reload the context remotely by giving the arguments. You don't need to go and modify the server xml. (Thanks to JMX)

2) You have to be careful with singleton beans, since they are destroyed and recreated on every reload. You may need to do some preparation in the server before the actual reload. Non-singleton beans are generally not a concern (though there can be exceptional cases).

3) You can apply AOP here if possible. How about notifying client applications on reload? You can do that with Spring AOP. I may write another post on AOP soon, so stay tuned.
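The preparation mentioned in point 2 can be sketched in plain Java (the names here are illustrative, not Spring's API): keep track of stateful singletons and dispose of them before refreshing the context.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch, not Spring's API: release singleton-held resources before a reload.
public class ReloadGuard {
    public interface Disposable { void dispose(); }

    private final List<Disposable> singletons = new ArrayList<Disposable>();

    public void register(Disposable d) { singletons.add(d); }

    // Call this before context.refresh() so sockets, pools and threads are released first.
    public void disposeAll() {
        for (Disposable d : singletons) d.dispose();
        singletons.clear();
    }

    public static void main(String[] args) {
        ReloadGuard guard = new ReloadGuard();
        guard.register(new Disposable() {
            public void dispose() { System.out.println("singleton disposed"); }
        });
        guard.disposeAll(); // prints "singleton disposed"; now it is safe to reload
    }
}
```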

OK, we are done for today. Please find the attached code for your reference.

BTW, I used
JDK 1.5.0_11-b03 and
Spring 2.0.
The only dependencies are spring-2.0.jar and commons-logging-1.1.jar.

Click here to get the source code for this sample.

Wednesday, June 27, 2007

hsenid becomes mysql enterprise gold partner



Nice to see hSenid become a MySQL Enterprise Gold Partner. I feel we should have done this much earlier, since we have been using MySQL heavily in our applications for a long time. I think we have a good bunch of people in both our development and support teams with MySQL experience.
I hope this move will give an extra boost to our engineers and that they will build up their MySQL expertise further.

Sunday, June 24, 2007

Eureka !!! Connecting to m1 3g vodafone modem in fedora7

Finally I managed to get my M1 3G modem working with my Vaio running Fedora 7. This is the procedure I used to get it working.
My system is: Linux 2.6.21-1.3194.fc7 (fedora 7)
Wvdial version : WvDial 1.54.0

1) Create the following wvdial.conf
[Dialer Defaults]
Phone = *99#
Username = ppp@aplus.at
Password = ppp
Stupid Mode = 1
Dial Command = ATDT

[Dialer pin]

Init2 = AT+CPIN=5623

[Dialer A1]
Modem = /dev/ttyUSB0
Baud = 460800
Init3 = at+cgdcont=1,"ip","sunsurf"
ISDN = 0
Modem Type = Analog Modem


2) Then connect your Vodafone mobile USB modem (HUAWEI).
3) Wait till it is detected properly. (You can see logs from /var/log/messages)
4) Try to connect using wvdial.
wvdial --config wvdial.conf A1

For me it always failed like this:
--> WvDial: Internet dialer version 1.54.0
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
--> Sending: ATQ0
--> Re-Sending: ATZ
--> Modem not responding.


5) Don't worry about that. Run the following commands:
modprobe uhci_hcd ehci_hcd ppp
modprobe usbserial vendor=0x12d1 product=0x1003
rmmod usb-storage


6) Now remove the USB modem and plug it back in.
7) Wait till it is detected, then try the following command again.
wvdial --config wvdial.conf A1

This time you should be able to shout "Eureka!!".
You should get the following message:

[root@localhost linux-scripts]# wvdial --config wvdial.conf A1
--> WvDial: Internet dialer version 1.54.0
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
ATZ
OK
--> Sending: at+cgdcont=1,"ip","sunsurf"
at+cgdcont=1,"ip","sunsurf"
OK
--> Modem initialized.
--> Sending: ATDT*99#
--> Waiting for carrier.
ATDT*99#
CONNECT
--> Carrier detected. Starting PPP immediately.
--> Starting pppd at Sun Jun 24 11:03:33 2007
--> pid of pppd: 5255
--> Using interface ppp0
--> local IP address 172.22.33.177
--> remote IP address 10.64.64.64
--> primary DNS address 10.11.12.13
--> secondary DNS address 10.11.12.14


Sorry, I forgot one thing: you have to be a superuser (root) to do this.

Hope you succeed with this. If not, try unplugging the modem and plugging it back in; it should work.

Cheers!
References
1) Linux HSDPA Modem Huawei E220 with Gentoo (Provider: Austrian A1) - QuirxiPedia
2) Vodafone 3G (UMTS) Howto

Tuesday, June 19, 2007

Getting Screenshots in symbian phones

Today I came across a situation where I wanted to get some screenshots of a Symbian application we developed, to update some product documents. I did a bit of surfing and ended up with a very nice tool called "Best Screen Snap", which gives a similar experience to "SnagIt" [the desktop screen-capture tool]. I think this is a very handy tool for mobile application developers. More importantly, it is freeware.
Click here for more info and download.

Sunday, June 10, 2007

Determining true memory usage in linux

Most of us complain about the high memory usage of applications running under Linux after merely checking memory with the 'top' command. Actually, in Linux, memory is never wasted: almost all free memory is used for disk caching. True memory usage can be determined from the buffers/cache figures reported by the 'free' command.
We also have a lot of concerns over swapping. Linux mainly focuses on services and is greedy about allocating memory to applications. Anyway, swapping can be tuned via the vm.swappiness parameter in /etc/sysctl.conf. This wiki has very useful information on Linux memory management.
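As a worked example of the arithmetic: the "truly available" figure is free memory plus buffers plus cache, which is what free's "-/+ buffers/cache" line reports. The numbers below are made up for illustration.

```java
// Sketch of the "true free memory" arithmetic behind free's -/+ buffers/cache line.
public class TrueFree {
    public static long trulyFreeKb(long freeKb, long buffersKb, long cachedKb) {
        // Buffers and cache are reclaimable, so they count as available memory.
        return freeKb + buffersKb + cachedKb;
    }

    public static void main(String[] args) {
        // Illustrative figures in kB, as `free` might print them.
        long free = 102400, buffers = 51200, cached = 819200;
        System.out.println("Truly available: " + trulyFreeKb(free, buffers, cached) + " kB");
        // prints "Truly available: 972800 kB"
    }
}
```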

Saturday, June 9, 2007

Funny answers in mysql for IS NULL queries

Have you seen MySQL return non-NULL values for an IS NULL query? If not, you can try this out.
create table null_test(a int not null auto_increment, b int not null, primary key (a));
insert into null_test(a, b) values (0, 1);
select * from null_test where a is null;

+---+---+
| a | b |
+---+---+
| 1 | 1 |
+---+---+
Then retry: mysql> select * from null_test where a is null;
Empty set (0.00 sec)

You can prevent this by
  • setting the SQL mode to NO_AUTO_VALUE_ON_ZERO. (Then the next sequence number is generated only when NULL is inserted for the column.)
  • setting the SQL_AUTO_IS_NULL server variable to OFF. (When it is ON, an IS NULL query on an AUTO_INCREMENT column returns the last inserted row.)

This behavior is useful for ODBC programs, such as Access. But when such a MySQL table is restored from a dump, the data can come out different.
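The two preventive settings can be applied per session like this (a sketch for MySQL 5.0; use SET GLOBAL or the server configuration to make them permanent):

```sql
-- Sketch: session-level settings that prevent the surprising IS NULL match.
SET SESSION sql_mode = 'NO_AUTO_VALUE_ON_ZERO';
SET SESSION SQL_AUTO_IS_NULL = 0;
```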

"Flying High" Fedora 7

I upgraded my Fedora 6 to Fedora 7 a week ago. There are a lot of new features, including GNOME 2.18, which is better for personal use. Please refer to the release notes for further details.

The upgrade procedure is also very safe and easy. I first downloaded the DVD ISO image from here, then burnt a DVD and upgraded my Fedora 6 straight away. I took the risk of not taking any backups before doing it. For downloading, they recommend BitTorrent, but it was very slow at the time (probably because I downloaded soon after the release and few people in my region were seeding), so I used FTP instead.


This time Fedora is promoting the "Flying High" theme, in contrast to their earlier themes.

They shipped Firefox 2.0 with this release, whereas in Fedora 6 they didn't recommend using Firefox 2.0. There are a lot of other features like the SELinux GUI, MPlayer support, Pidgin (formerly Gaim), etc.

I am still struggling with the following things and working on them:

1) For my VAIO FE35GP model, I could not get the built-in WiFi working.
2) The Motion Eye camera is not working.
3) My M1 Vodafone 3G USB modem is not working in Fedora.

Overall, Fedora 7 is looking good and ships with a lot of applications, which is great for people who use Fedora as their personal OS.

Cheers


Monday, June 4, 2007

Carnegie Mellon West - The New Software Industry Conference 2007

Carnegie Mellon West
The Fisher IT Center at the Haas School of Business
Services: Science, Management, and Engineering Program at U.C. Berkeley

Presented a conference on

The New Software Industry:
Forces at Play, Business in Motion

The event took place on April 30, 2007.

Globalization, outsourcing, and world-flattening advances in technology continue to rock the software industry in ways that will significantly alter the way that technologists do business. This conference brought together academics and industry specialists to explore the background setting, the current status, and the future of the software industry. [1]

Watch the videos
Presentation slides

Reference: [1] Carnegie Mellon West Website (3 June, 2007)

Saturday, June 2, 2007

Determine Resource Usage for a SQL session

Now we can use the profiling session variable to determine resource usage for a SQL session.
It was introduced in MySQL 5.0.37.
All the details of variable usage can be found at
http://dev.mysql.com/doc/refman/5.0/en/show-profiles.html

This gives an idea of the information that can be extracted:
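Profiling is off by default, so a statement has to be profiled before it can be shown; a minimal sketch of enabling it for the current session:

```sql
-- Sketch: turn on profiling for this session (MySQL 5.0.37+),
-- then list profiled statements to find the query number.
SET profiling = 1;
SHOW PROFILES;
```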

mysql> SHOW PROFILE CPU FOR QUERY 2;

+----------------------+----------+----------+------------+
| Status               | Duration | CPU_user | CPU_system |
+----------------------+----------+----------+------------+
| checking permissions | 0.000040 | 0.000038 |   0.000002 |
| creating table       | 0.000056 | 0.000028 |   0.000028 |
| After create         | 0.011363 | 0.000217 |   0.001571 |
| query end            | 0.000375 | 0.000013 |   0.000028 |
| freeing items        | 0.000089 | 0.000010 |   0.000014 |
| logging slow query   | 0.000019 | 0.000009 |   0.000010 |
| cleaning up          | 0.000005 | 0.000003 |   0.000002 |
+----------------------+----------+----------+------------+


All the profiling information is stored in the PROFILING table of the INFORMATION_SCHEMA database. More details can be extracted by querying the table directly.
http://dev.mysql.com/doc/refman/5.0/en/profiling-table.html

As this information is session-based, it is lost when the session ends.
Profiling information is very useful for estimating the performance of MySQL queries.

Friday, June 1, 2007

Offline readings with google reader

This may be great news for people who carry laptops and use Google Reader as their RSS or Atom feed reader, or for any Googlers. You no longer always need internet access to read articles. Just download your subscribed articles [up to 2000] into Google Reader when you have internet access, and read them leisurely [while traveling, on a flight, on the bus]...


The newly released Google Gears [a browser plugin] enables offline reading in Google Reader. You can download up to 2000 articles.


Google recommends Firefox 1.5+ or IE 6+, but somehow I only succeeded with Firefox and failed with IE 6 [understandable, as it is still a beta].

Tuesday, May 29, 2007

Presentation on GWT

I listened to Bruce Johnson's presentation on GWT. It was an introductory presentation, so he didn't really dig into GWT, but it is very good for understanding GWT's roadmap. He justifies the design decisions nicely.

It would have been great if he had also discussed the limitations of GWT. While we understand the difficulties of compiling Java to JavaScript, there are situations where we really struggle due to those limitations.

One limitation that got us into trouble is serialization. GWT has its own serialization model for its own reasons(!), and we had to write delegates for third-party modules in order to use them with GWT. The good news is that the GWT team is now planning to move away from the IsSerializable interface in coming versions!

Of course there are a lot of others, and fortunately we have been able to get rid of those issues one by one.

Those who are interested in listening to the presentation can click here.

Tuesday, May 22, 2007

How to create a label cloud in Blogger

Blogger has the concept of 'labels', which are similar to 'tags', an important Web 2.0 concept. However, by default Blogger does not come with a tag/label cloud. I found an interesting article on the web about how to add one.

Read

I will try to implement the tag cloud for this Blog soon.

Saturday, May 19, 2007

IVY dependency manager (Part1 Introduction) Contd....

Himath requested that we discuss dependency management in detail. Thanks, Himath, for your feedback. In this post I will discuss dependency management in more detail before going into the IVY guidelines.

Dependency management is, as its name suggests,
managing a project's dependencies in centralized storage.
For example, say we use spring-1.2.8 in our projects A and B, and assume Spring 1.2.8 depends on commons-logging-1.0.4 and log4j-1.12. Without a dependency manager, each project needs a lib folder holding all of these dependencies. So what are the headaches of doing this?

  1. You have to remember these dependencies every time you add a new lib. For example, if you add a new dependency like foo.jar, you need to know foo.jar's own dependencies.
  2. You have to keep jar files in your project repositories. For example, we may need to keep the Spring jars in every project repository that uses Spring.
  3. Switching from one version to another is very difficult. For example, when moving from Spring 1.2.8 to Spring 2.0, I have to add the new set of dependencies to my project's dependency folder all over again.
  4. Maintaining project artifacts is difficult. For example, if we have sub-modules a, b and c in our project and we need the b and c jars to build a, we may have to build b and c before building a.jar, even though their code base hasn't changed.

Above are few of the many problems we face when we don't have a centralized dependency management system.

So now lets discuss the advantages of having a centralized dependency management system.
If we have a dependency management system, then all of our artifacts (for us, mostly jar files) live in a central repository (another svn repository). Every artifact has a configuration file holding its dependency information. This is an extract of a sample configuration file. (It is taken from one of our projects, etl-server.)
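As a hypothetical illustration of such a configuration file (the names are made up to match the description; this is not the real etl-server file), an ivy.xml looks roughly like this:

```xml
<!-- Hypothetical sketch only; the real etl-server file is not reproduced in this post. -->
<ivy-module version="1.0">
    <info organisation="hsenid" module="etl-server-core" revision="1.1"/>
    <publications>
        <artifact name="etl-server-core" type="jar"/>
    </publications>
    <dependencies>
        <dependency org="springframework" name="spring" rev="1.2.8"/>
    </dependencies>
</ivy-module>
```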


Here you can see that etl-server-core-1.1.jar is the artifact, and its dependencies are listed inside the dependencies tag. That tells the story: Spring 1.2.8 is in the dependency list, so you don't need to keep the log4j entries here separately, since they are already inside the Spring dependency file. Going further, whichever projects use etl-server-core-1.1.jar don't have to keep these dependencies in their own lists either, because they are already referenced through the etl-server-core jar.

So let's list the points now.

1) Using dependency management, we can maintain a centralized repository. In your project, you only need an XML file listing your dependencies. Ivy takes care of downloading them from the dependency repository when you build the project with ant.

2) Ivy maintains a local cache directory (typically ~/.ivy) for your downloaded dependencies, so you don't need to download them every time; the download is one-off.

3) Publishing your project artifact to the repository is as easy as running the ant publish target!

4) Thanks to the chain-of-dependencies concept, if you refer to an artifact foo.jar in your dependency file, whatever dependencies foo.jar needs are taken care of by foo's own dependency file. You don't need to list them in yours, which makes your life much easier.

For more details, it is worth going through the IVY documentation site before we discuss this further.

Sunday, May 13, 2007

IVY dependency manager (Part1 Introduction)

It's a Sunday noon in Singapore. I just had a chat with Chirantha, who is based at Royal Brunei these days. He just joined our blog. Welcome, Chirantha. Awaiting a few posts from you on your MySQL expertise :)

This is an attempt to introduce our recently set up dependency repository. As a step towards making full use of version-controlled source management, we have been using our own internal artifact repository for some time now.

I will split the topic into three parts:
1) IVY Dependency manager - Introduction.
2) How to use our repository.
3) How to publish artifacts to IVY.

This week we will discuss the ROADMAP to IVY :)
For a long time our engineers have felt the need for a common repository for our artifacts. Having a common dependency manager is good for many reasons: in particular, we don't have to keep individual dependencies for each and every product, and version management becomes very easy. So it was decided to set up a repository internally.

Then we needed to make the right choice. The two main candidates were Maven and IVY. After several considerations we decided to vote for IVY. Let's discuss what led us to choose IVY for our repository.

  • We thought IVY is a handy, lightweight tool in its own right.
  • Considering the scope of our requirement, where we mingle our own proprietary artifacts with open-source ones, we do not need Maven's ibiblio or other out-of-the-box repositories.
  • We are all used to working with ant. The complexity of using Maven with ant is a widely known fact! Maven expects users to do everything the Maven way; basically, Maven is a project management tool, whereas IVY targets dependency management.
  • IVY's ant support is really good. Publishing artifacts to the repository can be done easily through ant and IVY.

So we decided to use IVY for our own reasons. Maven has its goods and bads, and so does IVY. For more comparisons, refer to the IVY site.

So stay tuned with hsenidians for my next post on how hsenid is using IVY.


BTW, it is notable that IVY is still in the Apache Incubator. Though people tend to compare IVY with Maven, they are not competitors :) They are different tools with different scopes. So we mark our big +1 for IVY ;)

Saturday, May 12, 2007

Software Architecture Series - Peer-to-peer Architecture

I thought of writing a series of posts on Software Architectures. In this first post, I will discuss Peer-to-peer (P2P) Architecture.

In a P2P architecture there's no notion of clients and servers; equal peer nodes show the behavior of both clients and servers.

In a P2P architecture, clients provide resources (bandwidth, storage space, computing power), which results in a more robust system, with data replicated over multiple peers and no single point of failure.

Applications of P2P Architecture can be categorized into the following areas:
  1. Communication and Collaboration - Chat/IRC, IM (AOL, ICQ, Yahoo, MSN), Jabber
  2. Distributed Computation - using peer computers' processing power
    • Seti@home - Search for Extra-Terrestrial Intelligence (SETI)
    • genome@home - understanding genetic information
  3. Database Systems
    • Local Relational Model (LRM) - relational queries run across 1000s of computers
  4. Content Distribution - infrastructure for sharing digital media and other data
    • Most P2P systems fall under this category, e.g. Napster, Publius, Gnutella, KaZaA, Freenet, MojoNation, Oceanstore, PAST, Chord, Scan, FreeHaven, Groove, Mnemosyne
I personally found the idea of contributing to the search for extraterrestrial intelligence, and to the understanding of genetic information, by sharing one's PC resources interesting.

A good example of a P2P application of today is BitTorrent [2].

References:
[1] A Survey of Peer-to-Peer Content Distribution Technologies
[2] Wikipedia (BitTorrent)

Friday, May 11, 2007

Getting Things Done

Is there a silver bullet for time management? This is a question I have been trying to answer for quite some time. I have now come to the conclusion that there is no such silver bullet; each person needs to choose the method that suits him or her best.

We've come a long way since simple TODO lists. In fact, there are quite a few modern time-management methods. Stephen Covey introduces a great method in his book "The Seven Habits of Highly Effective People" and its sequel "First Things First".

I liked the concept of focusing on the important things rather than the urgent things we tend to focus on. Just knowing these principles could improve one's time management a lot.

Tony Robbins also has his own time-management method, called "Time of Your Life". Here he focuses on chunking, grouping related tasks together, and being emotional about achieving them. This method is more heavyweight than Covey's, and for someone learning it, the planning alone could take about an hour a day. So to use this method one has to be really disciplined.

A recent time management method I stumbled across is called "Getting Things Done" by David Allen.


I got an eBook and have yet to apply the whole book. At least I've been using his concepts for my email, which runs into hundreds of messages per day. So far this has been the most effective method I've used for my email.

This method is simple and lightweight. Unlike the previous two, you don't have to spend too much extra time planning.

I used to delegate a lot of tasks to a lot of people and found it difficult to keep track of them all. This is especially difficult as some people tend to forget. Using this method, it's really easy to keep track of delegated ("Waiting For") tasks.

Well, in conclusion, I'd say there's no silver bullet, but knowing all these techniques really helps: in this age of juggling thousands of tasks, we can no longer depend on TODO lists alone.

Wednesday, May 9, 2007

Random thoughts - Software Architecture to Toastmasters

The last two days have been quite warm here. Yesterday the temperature rose to about 94F, which is in fact hotter than a typical day in Colombo.

I've been working on a briefing for Carnegie Mellon; the subject was the responsibilities of software architects. This is a topic with many definitions: search the web and you'll probably find hundreds. I did a little study myself, and in the end I identified 4 main responsibilities of software architects, under which hundreds of specific responsibilities can be categorized. The 4 categories are:

  1. Architecting and maintaining the architecture
  2. Communicating (especially the architecture) with all stake holders
  3. Providing Technical Leadership to the project
  4. Acting as a Consultant
What surprised me was that, except for the first, all of these roles involve communication and leadership to a very high extent. Simply put, a good architect must be a leader and a powerful communicator as much as have the required technical skills.

This is something most technical people ignore, which brings me back to the title. I joined a Toastmasters club about a month ago, and today I did my first prepared speech. Toastmasters is an excellent way to improve your communication skills, and I recommend it to all.


Monday, May 7, 2007

Marking 10th year anniversary


As we hsenidians mark our 10th year of successful running, it is time to move on. The best part of the hSenid culture is adopting new methodologies, and there are lots of examples from the past: the way we adopted the open-source culture and the AGILE process are two of them.
So what now? We saw the much-hyped (hype is a bad word though; Tim O'Reilly will tell you more here) Web 2.0 as an important entity in the coming years, and we showed our commitment through the Web 2.0 Expo last month. Oh yes, there are more pictures on our OrangeHRM blog. (Folks from OrangeHRM will take this for sure ;)

So hsenidians from various parts of the world (USA, Malaysia, Singapore, Brunei, Kuwait, India and, of course, Sri Lanka!) feel it is better to maintain our own developer blog. We know we have expertise in certain areas, and it's time to pay back the world of the web, which has helped us along the way over the last 6-7 years. (Innocent folks, please trust me ;) )

So what's next? We will keep on blogging. Of course, you can expect good feeds from our areas of expertise!