COVID-19 Statistics

Given the latest updates on COVID-19 (commonly known as the Coronavirus), which is increasing the number of deaths in China and Europe in particular, I have decided to work on a small project which statistically shows the number of deaths on a global level. The map below is updated on a daily basis.

I only had a couple of hours to work on the project, so bear with me on its simplicity; however, I believe that it shows some very important information.

The data is taken from Johns Hopkins University and is updated on a daily basis. Let's look at the data in a little more detail to understand the dynamics of those affected by the virus.

First of all, one important factor to take into consideration is the age of the people who have died due to the COVID-19 virus. Just by taking a look at the graph below, we can immediately identify that the number of fatalities starts rising in the groups of people over the age of 60, with most of the people affected being in the 80+ range.


Apart from that, another element to take into consideration is the pre-existing conditions which patients had before being diagnosed with the Coronavirus. Below are some of the conditions which patients had before being infected by the virus.

Hopefully through this mini-project, you readers will get a better understanding of what the whole hype is about.

Utilising a Graphical Processing Unit to its full Potential

During my Master's studies in artificial intelligence, I came across a paper by three PhD students from Vilnius University in Lithuania while researching papers about machine learning and algorithmic trading. The paper spoke of the diverse approaches taken by different researchers on the use of Graphical Processing Units (GPUs) to process high-frequency algorithmic trades. This got me thinking. How could a GPU help in the processing of trading ticks? What impact does a GPU have in mining cryptocurrencies? How difficult is it to actually use the processing power of a GPU to program crypto mining and high-frequency trading? But to answer these questions, we first need to understand the processing power in a GPU and how this has exploded in the last decade.

Back in the days when I was still in my early twenties and still an avid PC gamer, we saw a surge in the hardware demands of new games. This was because these games were not just providing relaxation time to the gamer, but were striving to provide a completely immersive gaming experience.

Game development companies started giving us more open-world and secondary in-game missions, which not only lengthened the amount of time needed to finish a game, but also gave the user a more immersive experience of it. This is notwithstanding the improvement in graphics, which gave a more lively feel to the games themselves. Let's take for example any game in the Assassin's Creed series. This is an action-adventure game with the main mission always being to work your way up the enemy ranks, gather more information and finally find the location of the main boss and put an end to him. A gamer can simply follow that path, and that would be enough to complete the game. However, it would be a pity to miss out on the many side missions each game in the series offers. The game producers do a wonderful job of creating a beautiful, graphically appealing, near-to-life open world (since the game is nearly always set in an actual city).

This improvement in games gave space for GPUs to be manufactured as a dedicated module in a PC or laptop. We started seeing GPU manufacturers such as Nvidia, ATI, EVGA and MSI produce graphics components which connected to the PCI Express motherboard bus, which was 2 to 3 times faster than the earlier PCI and ISA buses. Apart from that, GPUs now draw their power directly from the power supply rather than through the motherboard. What this did, in essence, was allow graphics-hungry programs to take advantage of the dedicated GPU rather than use the CPU's processing power to run them. Dedicated GPUs are autonomous in the sense that they have their own processing cores and memory, independent of the machine's memory.

The architecture of a GPU is designed to handle many compute-intensive, memory-intensive processes, which makes it the ideal hardware for data-intensive workloads. But what happens when the user is not running a game or a graphically intensive program such as post-production video editing or 3D rendering? Most of the time, the GPU's power is not fully utilized. This is where GPU usage for algorithmic trading and crypto mining comes in.

The two main criteria for algorithmic trading are the speed at which the same set of computations can be performed on multiple sets of data, and programmability. For this purpose, the CPU on its own is not a suitable component to run the process. Running data processes on a GPU helps achieve a higher standard of processing and a quicker throughput, so that when combining multi-core CPU processing with GPU performance, we can get the best outcome from a machine learning process. Programming for a GPU is, however, an intensive task. It is not that simple to program a process which uses the GPU's resources in backtesting; in particular, introducing double loops and random access patterns is not simple on a GPU. For this reason, most of the batch processes being sent to the GPU need to be pre-calculated on the CPU and then passed on to the GPU. Nvidia was one of the first companies to create GPU hardware which could be made easier to program, through their invention of CUDA (Compute Unified Device Architecture). CUDA works by allowing developers to write GPU kernels in familiar languages such as C/C++ (with bindings for environments like MATLAB), so that when a program running on the CPU invokes a GPU kernel, many copies of that same kernel are distributed to different multiprocessors in the form of threads and executed. This concept has revolutionized the way trades and trading computations have been done over the past decade. Using a GPU for high-frequency trading has helped give a low latency to the computations and allows reacting to every minimal change in price despite the high volatility of algorithmic trading. This caters for the market demand in high-frequency trading, where the execution of computerized trading strategies is characterized by extremely short position-holding periods.

The success of a high-frequency trading algorithm depends on its ability to react to a change in the financial situation faster than others. Sometimes the change can occur so quickly that a new term has been coined for this kind of trading: Ultra High Frequency Trading. This allows traders to exploit minimal changes in the financial data so that they can extract the best profit.

The most promising machine learning algorithm used on GPUs is the Support Vector Machine (SVM), which can be conveniently adapted to parallel architectures. Over the last decade, many works have created programs to accelerate the time-consuming training phase of SVMs on many-core GPUs.

In "Hierarchical Decomposition Algorithm for Support Vector Machine Training", J. Vanek introduced a new approach to support vector machine training on GPUs, called Optimized Hierarchical Decomposition SVM (OHD-SVM). It uses a hierarchical decomposition iterative algorithm that allows using matrix-matrix multiplication to calculate the kernel matrix values. The biggest difference was on the largest datasets, where they achieved a speed-up of up to 12 times in comparison with the fastest previously published GPU implementation.

Others have also programmed algorithms for GPUs to help in the deep learning field. Some developed general principles for massively parallelizing unsupervised learning tasks using graphics processors and showed that these principles can be applied to successfully scale up learning algorithms for both deep belief networks (DBNs) and sparse coding. Their implementation of DBN learning is up to 70 times faster than a dual-core CPU implementation for large models.

The improvement of GPUs and their processing power has not only revolutionized the gaming industry but has also helped advance high-frequency algorithmic trading in the Fintech industry. GPU processing power is no longer limited to games and graphical rendering; we can now use the full potential of such a component.

The Inclusion of Catholics

A couple of days after the release of our bishops' guidelines on how to interpret chapter 8 of Amoris Laetitia, I decided to go and watch Martin Scorsese's Silence at the cinema. At the time, I hadn't yet made an analogy between the two events; however, while watching the movie, it struck me.


The movie is based on Endo's book bearing the same title, and portrays two Jesuit priests who embark on a voyage to Japan in the 17th century, on a mission to find their Jesuit teacher, Fr. Ferreira, who they had heard had apostatized (renounced the Catholic faith), and to convert the Japanese natives to Christianity. Through the course of the movie, however, we learn that because of the Japanese culture it becomes extremely difficult to convert the natives. We also learn that most converts were being persecuted in the harshest of manners so that they would not profess their religion. We learn that Fr. Ferreira himself had indeed apostatized and, towards the end of the movie, so does Fr. Rodrigues, the main protagonist.

Throughout the movie (me being a rather fervent Catholic layman), I was constantly posing the question: is it legitimate to renounce your Catholic faith in a country which is actively persecuting your religion? Is it politically and socially diplomatic to renounce your faith for the sake of your own life? Or was the behavior of Fr. Ferreira and Fr. Rodrigues a cowardly move, to be accepted into such a country and stop the persecution? Scorsese emphasizes the fine line between doing the will of God with an iron hand as opposed to being practical and perhaps portraying a more loving God who dislikes the shedding of blood, even when it is shed in his name.

The Catholic world is currently confronted with a similar situation, this time within itself. A simple read through the newspapers or a Google search can clearly show the distinction between the two ideas which have emerged. Following the synods on the family held in Rome, the publication of the Apostolic Exhortation Amoris Laetitia and, more recently in our diocese, the guidelines published by our own bishops, much has been said in favor of and against the new theology being proposed by Pope Francis. The exhortation published last year has indeed shed new light on, and emphasized, the inclusion of faithful Catholics who, following a failure in their first attempt at marriage, were until recently not allowed to receive Holy Communion. Our beloved Pope has, out of a loving heart and a discerning mind, shown a compassionate face of the Catholic Church. This letter is not at all a cry against the Maltese priests and the Catholic community who are asking for a clarification of chapter 8 of Amoris Laetitia. Rather, these are the questions which I have asked myself as I try to discern the signs which our Lord is trying to bestow upon us through our Pope's teachings.

Going through previous Papal teachings, particularly those of Saint John Paul II, it had already been stated by the Congregation for the Doctrine of the Faith in 1994, headed at the time by Joseph Ratzinger (later to become Benedict XVI), that: "Pastors are called to help them [those who are currently in the state of irregular marriage] experience the charity of Christ and the maternal closeness of the Church, receiving them with love, exhorting them to trust in God's mercy and suggesting, with prudence and respect, concrete ways of conversion and sharing in the life of the community of the Church." Isn't this theology exactly what Amoris Laetitia is proposing to the Catholic Church? Isn't it also what the guidelines proposed by the Maltese bishops adhere to?

From where I see it, we need to start with an initial argument. The Sacrament of Holy Communion is not a right for all who have gone to MUSEUM and received their first Holy Communion in their childhood days. Holy Communion is a gift, given daily by Christ himself to the faithful who wish to grow spiritually in their relationship with God, together with the community around them. We as faithful practicing Catholics are in no position to claim for ourselves the right to receive the body of Christ in communion, simply because it is by God's grace that we can be in a receptive stance to receive him.


Having established that argument, we can move on to argue that since Holy Communion is a gift given to us by Christ himself, then we as fellow recipients of this Sacrament have no right to deny that gift to others. This is especially true when the authority who is asking us to consider each failure in matrimony as a separate case is doing so with authority in an Apostolic Exhortation enlightened by the Holy Spirit. What comes to mind right now is the parable of the workers in the vineyard, who were all asked by the landlord to work in his vineyard for a denarius a day (Mt 20:1-16). Irrespective of the time they started working, they were all paid a denarius. When the workers who had been working all day asked the landlord why they weren't paid more, his reply reflected his own generosity. Our decision to rebuke the official Catholic teachings and the guidelines given by our bishops makes us sound like the early workers in this parable.

Having set out an argument by which I as a Catholic have no right to judge others, since I haven't been through the same experiences, I believe our role here is to show love and compassion, a characteristic which is lacking in contemporary society. This role goes hand in hand with the priestly vocation to form the consciences of the people, lead them to take better decisions and discern their position adequately. Very conveniently, this Sunday's Gospel intertwines well with this message. Today's reading shows us a Christ who is rebuking the Jewish authorities over the old commandments given to them by Moses at Sinai. Most of these were later written down in the Torah and were observed to the letter by the Jewish community. Christ's message today takes the old legacy and transforms it into a new one, based on the concept of love towards others. In his own words: "Do not think that I have come to abolish the law or the prophets. I have come not to abolish but to fulfil." (Mt 5:17). We all know what the mission of Christ was, how he fulfilled it, what it led him to, and the grace bestowed upon us through his deed.


Let us not fail to recognize the signs of the times which are given to us from above. Let's not call Pope Francis our Pope only when his teachings conform to our norms and traditions. Let's be ready to open our arms wide to accept this community of people. Let us not be like many of the Japanese martyrs and die in vain to hold firm to our religion, but rather promote a Church which opens its doors to all who wish to encounter the loving face of Christ.

Testing the App – Part 2

In this second part regarding testing, we will go through the next four tasks which took place in the testing phase. In the previous blog post, three of the four tests were successful, a 75% success rate. In this post, the following tests will be conducted:

1. Checking the notification on different devices once an event is nearing.

Following the test that events are successfully parsed from the database, in this test we made sure that the notification is shown 2 days before the event takes place.

This test was successful.
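
In essence, the check verifies that the alarm backing the notification is scheduled 48 hours ahead of the stored event time. A minimal sketch of that calculation (the variable names are illustrative; the full AlarmManager wiring appears in the Sending Notifications post):

// Schedule the reminder two days before the event (illustrative names)
long eventMillis = eventCalendar.getTimeInMillis();
long reminderMillis = eventMillis - TimeUnit.DAYS.toMillis(2);
alarmManager.set(AlarmManager.RTC_WAKEUP, reminderMillis, pendingNotification);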

2. Checking access to the TiBi Facebook page.

This test was used to make sure that the official TiBi Facebook page is shown in a WebView when the fragment is loaded. Most of all, we made sure that the latest updates on the Facebook page are also parsed into the application and shown to the users.


This test was successful.

3. Checking that an email is sent to the correct address when providing feedback.

When sending feedback to the administrative team, we needed to make sure that the details would be sent correctly and that the email would be processed to the correct address. In this test, we also made sure that the app is able to connect to the internet and raise an exception if not. Both tests were successful.
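
Both checks map onto standard Android APIs. A rough sketch of how they might look inside an activity (the feedback address and the feedbackText variable are placeholders):

// Raise an exception when no network connection is available
ConnectivityManager cm = (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkInfo active = cm.getActiveNetworkInfo();
if (active == null || !active.isConnected()) {
    throw new IllegalStateException("No internet connection available");
}

// Hand the feedback over to an email client (placeholder address)
Intent email = new Intent(Intent.ACTION_SEND);
email.setType("message/rfc822");
email.putExtra(Intent.EXTRA_EMAIL, new String[]{ "feedback@example.com" });
email.putExtra(Intent.EXTRA_SUBJECT, "TiBi App Feedback");
email.putExtra(Intent.EXTRA_TEXT, feedbackText);
startActivity(Intent.createChooser(email, "Send feedback"));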

4. Checking that the location of the selected event is viewed correctly on the map.

A final test makes sure that the map is updated to show the correct geolocation of the event once an event is clicked. We made sure that the map was updated once the event location changed. This test was successful too.
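
A sketch of the update that runs when an event is clicked; the event object and its getters are illustrative, while the map calls are standard Maps API v2:

// Re-centre the map on the selected event (illustrative event accessors)
LatLng eventLocation = new LatLng(event.getLatitude(), event.getLongitude());
map.clear();
map.addMarker(new MarkerOptions().position(eventLocation).title(event.getName()));
map.moveCamera(CameraUpdateFactory.newLatLngZoom(eventLocation, 15f));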

Testing the App – Part 1

Now that the project has been completed, the testing phase needed to take over from the actual coding of the program, to make sure that both the server and client ends of the project work in synchronisation to achieve a sound experience on both levels. To test the project, a number of potential users were selected to test the app.

The tests which were made are the ones detailed below:

  1. Creation of a new user on the Database and test login on the client app.
  2. Resetting the password for a user.
  3. Creation of a new event and inspirational quote in the Database end and its view on the client app.
  4. Creation of a new group chat, making sure the chat log is synchronised across all devices.
  5. Checking the notification on different devices once an event is nearing.
  6. Checking access to the TiBi Facebook page.
  7. Checking that an email is sent to the correct address when providing feedback.
  8. Checking that the location of the selected event is viewed correctly on the map.

1. Creation of a new user on the database and test login credentials on the app

Here, we first of all created a new user on the database. All user details will be handled by a user who is familiar with the phpMyAdmin interface, which will therefore be used as the main backend interface. For this reason, for the time being, there is no need to develop an online back-office interface. If the need arises, this will be developed later.



The test has been successful.
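
For reference, the client-side half of this test can be sketched as a simple POST of the credentials to the PHP backend using Volley; the endpoint, field names and variables below are hypothetical:

// Hypothetical login endpoint and parameter names
StringRequest loginReq = new StringRequest(Method.POST,
        "http://example.com/login.php",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // The backend is assumed to answer "success" or "failure"
                Log.d(TAG, "Login response: " + response);
            }
        }, new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e(TAG, "Login failed: " + error.getMessage());
            }
        }) {
    @Override
    protected Map<String, String> getParams() {
        Map<String, String> params = new HashMap<String, String>();
        params.put("username", username);
        params.put("password", password);
        return params;
    }
};
AppController.getInstance().addToRequestQueue(loginReq);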

2. Resetting a password for a user

This test was meant to ensure that the update statement included in the resetting of the password works correctly. The user Abigail Muscat had her password reset through the front end.



This test was also successful.

3. Creation of a new event and inspirational quote in the Database end and its view on the app.

For this test, we made sure that the app would extract the daily inspirational message from the database and parse it to be shown to the users on the screen.



Here again, the test was successful.

4. Creation of a new group chat, making sure the chat log is synchronised across all devices.

For the fourth test, we needed to make sure that the chat feature was synchronised among all devices, and that each device received the group messages in a timely manner. This test failed because the code used to call the web service was deprecated and is no longer supported; therefore the fragment itself could not build.

For this reason, we have decided to pull this part of the project out of the application and research the capabilities of building a chat screen as part of our application, to be released in future updates. It was also decided that if this feature were to be developed in the future, the requirements would include giving each group chat a particular subject, making the feature more of a forum rather than a simple chat.

Sending Notifications to users

The notification service will be used whenever an event is nearing, to promote it and remind all the users of the application about the event. This can be done in a number of ways, which are detailed below:

Google Cloud Messaging: As per Google's documentation, "Google Cloud Messaging (GCM) is a service that helps developers send data from servers to their Android apps". Using this service, you can push data to an application whenever new data is available, instead of the app polling the server at fixed intervals. Integrating GCM in your Android application enhances the user experience and, most of all, saves a lot of battery consumption. The server side can be of any kind; in our case it would be the existing PHP backend.

Android Notification Service: A notification is a message you can display to the user outside of your application’s normal UI. When you tell the system to issue a notification, it first appears as an icon in the notification area. To see the details of the notification, the user opens the notification drawer. Both the notification area and the notification drawer are system-controlled areas that the user can view at any time. Notifications, as an important part of the Android user interface, have their own design guidelines. The material design changes introduced in Android 5.0 (API level 21) are of particular importance.

In my app, I have decided to implement the second method.

Below is the code which sets up a new notification:

public class NotificationView extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Simple activity opened when the user taps the notification
        setContentView(R.layout.notification);
    }
}

Once this is done, the next step is to create another class which extends BroadcastReceiver:

public class NotificationEvent extends BroadcastReceiver {
    private int notifID = 100;
    private int notifNum = 0;
    private NotificationManager notifManager;

    @Override
    public void onReceive(Context context, Intent intent) {
        popupNotif(context);
    }

    protected void popupNotif(Context context) {
        NotificationCompat.Builder evtNotif = new NotificationCompat.Builder(context);
        evtNotif.setContentTitle("TiBi Event");
        evtNotif.setContentText("");
        evtNotif.setTicker("");
        evtNotif.setSmallIcon(R.drawable.ic_launcher);
        evtNotif.setDefaults(Notification.DEFAULT_SOUND);
        evtNotif.setAutoCancel(true);
        evtNotif.setNumber(++notifNum);

        // Build a back stack so that tapping the notification opens
        // NotificationView with the expected "up" navigation
        Intent result = new Intent(context, NotificationView.class);
        TaskStackBuilder stack = TaskStackBuilder.create(context);
        stack.addParentStack(NotificationView.class);
        stack.addNextIntent(result);
        PendingIntent contentIntent = stack.getPendingIntent(0, PendingIntent.FLAG_UPDATE_CURRENT);
        evtNotif.setContentIntent(contentIntent);

        notifManager = (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
        notifManager.notify(notifID, evtNotif.build());
    }

    protected void cancelNotif() {
        notifManager.cancel(notifID);
    }
}

Once this is completed, the final step is to schedule the event notifier from the main activity so that the notification reaches the users. This is done through the following code:

Intent intent = getIntent();
String date = intent.getStringExtra(EventFragment.DATE);
String time = intent.getStringExtra(EventFragment.TIME);

long _date = Long.parseLong(date);
long _time = Long.parseLong(time);

// Format the stored epoch values into their individual components.
// Note the numeric month ("MM") and 24-hour clock ("HH"), so that the
// formatted strings can be parsed back into integers below.
SimpleDateFormat year = new SimpleDateFormat("yyyy");
SimpleDateFormat month = new SimpleDateFormat("MM");
SimpleDateFormat day = new SimpleDateFormat("dd");
SimpleDateFormat hour = new SimpleDateFormat("HH");
SimpleDateFormat minute = new SimpleDateFormat("mm");

int yyyy = Integer.valueOf(year.format(new Date(_date)));
int MM = Integer.valueOf(month.format(new Date(_date)));
int dd = Integer.valueOf(day.format(new Date(_date)));
int hh = Integer.valueOf(hour.format(new Date(_time)));
int mm = Integer.valueOf(minute.format(new Date(_time)));

Calendar cal = Calendar.getInstance();
cal.set(Calendar.HOUR_OF_DAY, hh - 1); // fire an hour before the stored time
cal.set(Calendar.MINUTE, mm);
cal.set(Calendar.DATE, dd);
cal.set(Calendar.MONTH, MM - 1); // Calendar months are zero-based
cal.set(Calendar.YEAR, yyyy);

// NotificationEvent is a BroadcastReceiver, so the PendingIntent must be
// created with getBroadcast() rather than getService()
Intent notifEvent = new Intent(this, NotificationEvent.class);
PendingIntent pi = PendingIntent.getBroadcast(this, 0, notifEvent, PendingIntent.FLAG_UPDATE_CURRENT);
AlarmManager am = (AlarmManager) getSystemService(ALARM_SERVICE);
am.setRepeating(AlarmManager.RTC_WAKEUP, cal.getTimeInMillis(), AlarmManager.INTERVAL_DAY, pi);
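
One wiring detail worth noting: since NotificationEvent is a BroadcastReceiver and NotificationView is an activity, both have to be declared in the AndroidManifest.xml for the alarm broadcast to be delivered. A minimal sketch (package paths depend on the project):

<!-- Sketch: declare the notification components inside <application> -->
<activity android:name=".NotificationView" />
<receiver android:name=".NotificationEvent" />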

Android Volley – Second Part of Two

The whole reason why the Android Volley library is being used will be shown in the following tutorial: it will be used to parse the JSON data retrieved from the database. JSON is very lightweight, structured, easy to parse and very human-readable. JSON is the best alternative to XML when an Android app needs to interchange data with a PHP server.

Written by Ficus Kirkpatrick and his team, Volley is a library released by Google at I/O 2013. The Google Play Store and a number of apps by Google use this library to perform network requests and remote image loading. The developers at Google claim that network requests performed through Volley are up to 10 times faster than other libraries, according to their tests.

In the code below, the two methods makeJsonObjectRequest() and makeJsonArrayRequest() are declared but kept empty for the time being:

private String jsonResponse;
    private void makeJsonObjectRequest() {
    }

    private void makeJsonArrayRequest() {
    }

The above methods will wrap the JSON parsing previously done through the JSONParser class.

Normally, JSON responses can be of two different types: a JSON object or a JSON array. If the JSON starts with {, it is considered a JSON object, while if it starts with [, it is a JSON array. Volley provides the JsonObjectRequest class to make a JSON object request. Here we fetch the JSON data by making a call to a URL and parsing it; finally, the parsed response is appended to a string and displayed on the screen.
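
As a sketch of how makeJsonObjectRequest() might look once filled in (the URL is a placeholder and the JSON field name is hypothetical):

private void makeJsonObjectRequest() {
    // Placeholder endpoint; replace with the real backend URL
    String url = "http://example.com/events.json";
    JsonObjectRequest jsonObjReq = new JsonObjectRequest(Method.GET, url, null,
            new Response.Listener<JSONObject>() {
                @Override
                public void onResponse(JSONObject response) {
                    try {
                        // Hypothetical field name, for illustration only
                        jsonResponse = response.getString("title");
                    } catch (JSONException e) {
                        e.printStackTrace();
                    }
                }
            }, new Response.ErrorListener() {
                @Override
                public void onErrorResponse(VolleyError error) {
                    VolleyLog.d(TAG, "Error: " + error.getMessage());
                }
            });
    // Hand the request over to the shared queue
    AppController.getInstance().addToRequestQueue(jsonObjReq);
}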

The JSON parsing code remains the same as implemented beforehand.

Android Volley – First Part of Two

Android Volley is a networking library which was introduced to make networking calls much easier and faster, without writing tons of code. By default, all Volley network calls work asynchronously, so developers don't need to worry about using AsyncTask anymore. This will therefore replace the current implementation of JSON parsing.

Volley comes with a lot of features. Some of them are:

  1. Request queuing and prioritization
  2. Effective request cache and memory management
  3. Extensibility and customization of the library to our needs
  4. Cancelling the requests

Volley excels at RPC-type operations used to populate a UI, such as fetching a page of search results as structured data. It integrates easily with any protocol and comes out of the box with support for raw strings, images, and JSON. By providing built-in support for the features you need, Volley frees you from writing boilerplate code and allows you to concentrate on the logic that is specific to your app.

Volley is not suitable for large download or streaming operations, since Volley holds all responses in memory during parsing. For large download operations, consider using an alternative like DownloadManager.

The core Volley library is developed in the open AOSP repository at frameworks/volley and contains the main request dispatch pipeline as well as a set of commonly applicable utilities, available in the Volley “toolbox.”

The best way to maintain the Volley core objects and the request queue is to make them global, by creating a singleton class which extends the Application object. To achieve this, a controller class was created:

public class AppController extends Application {
    public static final String TAG = AppController.class.getSimpleName();

    private RequestQueue mRequestQueue;
    private ImageLoader mImageLoader;
    private static AppController mInstance;

    @Override
    public void onCreate() {
        super.onCreate();
        mInstance = this;
    }

    public static synchronized AppController getInstance() {
        return mInstance;
    }

    // Lazily create a single request queue for the whole application
    public RequestQueue getRequestQueue() {
        if (mRequestQueue == null) {
            mRequestQueue = Volley.newRequestQueue(getApplicationContext());
        }
        return mRequestQueue;
    }

    // Image loader backed by an in-memory LRU bitmap cache
    public ImageLoader getImageLoader() {
        getRequestQueue();
        if (mImageLoader == null) {
            mImageLoader = new ImageLoader(this.mRequestQueue,
                    new LruBitmapCache());
        }
        return this.mImageLoader;
    }

    public <T> void addToRequestQueue(Request<T> req, String tag) {
        // Set the default tag if the supplied tag is empty
        req.setTag(TextUtils.isEmpty(tag) ? TAG : tag);
        getRequestQueue().add(req);
    }

    public <T> void addToRequestQueue(Request<T> req) {
        req.setTag(TAG);
        getRequestQueue().add(req);
    }

    // Cancel all in-flight requests carrying the given tag
    public void cancelPendingRequests(Object tag) {
        if (mRequestQueue != null) {
            mRequestQueue.cancelAll(tag);
        }
    }
}
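
One detail worth remembering is that the Application subclass has to be declared in the AndroidManifest.xml for the singleton to be created at startup. A minimal sketch (the icon and label values are placeholders for whatever the app already uses):

<application
    android:name=".AppController"
    android:icon="@drawable/ic_launcher"
    android:label="@string/app_name" >
    <!-- activities and other components stay as they are -->
</application>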

Volley ships with a powerful cache mechanism to maintain the request cache. This saves a lot of internet bandwidth and reduces user waiting time. The following is an implementation of the Volley cache methods:

Cache cache = AppController.getInstance().getRequestQueue().getCache();
Entry entry = cache.get(url);

if (entry != null) {
    // A cached response exists; decode its raw bytes
    try {
        String data = new String(entry.data, "UTF-8");
        // use the cached data here
    } catch (UnsupportedEncodingException e) {
        e.printStackTrace();
    }
} else {
    // No cached entry; fall back to a fresh network request
}

In the same way, requests can be cancelled, deleted or permanently switched off. Requests can also be prioritised; the priority can be Normal, Low, Immediate or High, according to which are the most important requests in the app:

private Priority priority = Priority.HIGH;
StringRequest strReq = new StringRequest(Method.GET, Const.URL_STRING_REQ,
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                Log.d(TAG, response.toString());
                msgResponse.setText(response.toString());
                hideProgressDialog();
            }
        }, new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                VolleyLog.d(TAG, "Error: " + error.getMessage());
                hideProgressDialog();
            }
        }) {
    // The priority is overridden on the request itself, not on the listeners
    @Override
    public Priority getPriority() {
        return priority;
    }
};
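
Finally, the request is handed over to the queue held by the AppController singleton shown above; the tag makes it possible to cancel the request later:

// Add the request to the global queue with a tag for later cancellation
AppController.getInstance().addToRequestQueue(strReq, "string_req");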

Using third-party API to implement Chat features

One of the main features of the app will be to let all users who have the app installed communicate with each other through a group chat. In order to achieve this, a third-party API had to be used. We have seen a large number of apps developed in the recent past which help users connect with each other across different mediums. Apps like Facebook Messenger, WhatsApp, Viber and Couple bring users of the same app together to communicate with each other. One would be surprised to learn that it is rather easy to develop a chat app of your own, with some great features which differ from the existing solutions on the market.

The chat feature can be achieved by building a simple group chat using Java sockets. This is not the only way to build a chat app, but using this method one can build a simple group chat easily. Another approach would be to use push notifications instead of sockets, but that is a more complex solution and not necessarily needed in our work.

This chat room will only be accessible to users who have the app installed on their mobile device or tablet. Therefore it won’t be accessible through a web application.

We will start by creating the necessary XML files for the chat screens. Primarily, we will need to create the following XML files:

bg_message_from.xml

<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
 android:shape="rectangle" >
<!-- view background color -->
 <solid android:color="@color/bg_msg_from" >
 </solid>
<corners android:radius="2dp" >
 </corners>
</shape>

bg_message_you.xml

<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
 android:shape="rectangle" >
<!-- view background color -->
 <solid android:color="@color/bg_msg_you" >
 </solid>
<corners android:radius="2dp" >
 </corners>
</shape>

The above XML files define the different background colours for chats coming from other participants and for chats sent by the user of the app. Next are the actual chat bubbles:

left_bubble.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
 android:layout_width="match_parent"
 android:layout_height="match_parent"
 android:orientation="vertical"
 android:paddingBottom="5dp"
 android:paddingTop="5dp"
 android:paddingLeft="10dp">
<TextView
 android:id="@+id/lblMsgFrom"
 android:layout_width="wrap_content"
 android:layout_height="wrap_content"
 android:textSize="12dp"
 android:textColor="@color/lblFromName"
 android:textStyle="italic"
 android:padding="5dp"/>
<TextView
 android:id="@+id/txtMsg"
 android:layout_width="wrap_content"
 android:layout_height="wrap_content"
 android:textSize="16dp"
 android:layout_marginRight="80dp"
 android:textColor="@color/title_gray"
 android:paddingLeft="10dp"
 android:paddingRight="10dp"
 android:paddingTop="5dp"
 android:paddingBottom="5dp"
 android:background="@drawable/bg_msg_from"/>
</LinearLayout>

right_bubble.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
 android:layout_width="match_parent"
 android:layout_height="match_parent"
 android:gravity="right"
 android:orientation="vertical"
 android:paddingBottom="5dp"
 android:paddingRight="10dp"
 android:paddingTop="5dp" >
<TextView
 android:id="@+id/lblMsgFrom"
 android:layout_width="wrap_content"
 android:layout_height="wrap_content"
 android:padding="5dp"
 android:textColor="@color/lblFromName"
 android:textSize="12dp"
 android:textStyle="italic" />
<TextView
 android:id="@+id/txtMsg"
 android:layout_width="wrap_content"
 android:layout_height="wrap_content"
 android:layout_marginLeft="80dp"
 android:background="@drawable/bg_msg_you"
 android:paddingBottom="5dp"
 android:paddingLeft="10dp"
 android:paddingRight="10dp"
 android:paddingTop="5dp"
 android:textColor="@color/white"
 android:textSize="16dp" />
</LinearLayout>

Finally, the XML containing the complete fragment layout which holds all the details for the chat screen:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
 xmlns:tools="http://schemas.android.com/tools"
 android:layout_width="match_parent"
 android:layout_height="match_parent"
 android:background="@drawable/tile_bg"
 android:orientation="vertical" >
<ListView
 android:id="@+id/list_view_messages"
 android:layout_width="fill_parent"
 android:layout_height="0dp"
 android:layout_weight="1"
 android:background="@null"
 android:divider="@null"
 android:transcriptMode="alwaysScroll"
 android:stackFromBottom="true">
 </ListView>
<LinearLayout
 android:id="@+id/llMsgCompose"
 android:layout_width="fill_parent"
 android:layout_height="wrap_content"
 android:background="@color/white"
 android:orientation="horizontal"
 android:weightSum="3" >
<EditText
 android:id="@+id/inputMsg"
 android:layout_width="0dp"
 android:layout_height="fill_parent"
 android:layout_weight="2"
 android:background="@color/bg_msg_input"
 android:textColor="@color/text_msg_input"
 android:paddingLeft="6dp"
 android:paddingRight="6dp"/>
<Button
 android:id="@+id/btnSend"
 android:layout_width="0dp"
 android:layout_height="wrap_content"
 android:layout_weight="1"
 android:background="@color/bg_btn_join"
 android:textColor="@color/white"
 android:text="Send" />
 </LinearLayout>
</LinearLayout>

Moving on to the Java class, the following code shows the connection with the WebSocket. The code below checks for new messages and displays them to the user:

private void sendMessageToServer(String message) {
 if (client != null && client.isConnected()) {
 client.send(message);
 }
 }
private void parseMessage(final String msg) {
try {
 JSONObject jObj = new JSONObject(msg);
 String flag = jObj.getString("flag");
 if (flag.equalsIgnoreCase(TAG_SELF)) {
String sessionId = jObj.getString("sessionId");
 utils.storeSessionId(sessionId);
Log.e(TAG, "Your session id: " + utils.getSessionId());
} else if (flag.equalsIgnoreCase(TAG_NEW)) {

 String name = jObj.getString("name");
 String message = jObj.getString("message");
 String onlineCount = jObj.getString("onlineCount");
showToast(name + message + ". Currently " + onlineCount
 + " people online!");
} else if (flag.equalsIgnoreCase(TAG_MESSAGE)) {

 String fromName = name;
 String message = jObj.getString("message");
 String sessionId = jObj.getString("sessionId");
 boolean isSelf = true;
 if (!sessionId.equals(utils.getSessionId())) {
 fromName = jObj.getString("name");
 isSelf = false;
 }
Message m = new Message(fromName, message, isSelf);
 appendMessage(m);
} else if (flag.equalsIgnoreCase(TAG_EXIT)) {

 String name = jObj.getString("name");
 String message = jObj.getString("message");
showToast(name + message);
 }
} catch (JSONException e) {
 e.printStackTrace();
 }
}
@Override
 public void onDestroy() {
 super.onDestroy();
 if (client != null && client.isConnected()) {
 client.disconnect();
 }
 }
 private void appendMessage(final Message m) {
 getActivity().runOnUiThread(new Runnable() {
 @Override
 public void run() {
 // Add the message to the list and refresh the adapter
 listMessages.add(m);
 adapter.notifyDataSetChanged();
 }
 });
 }
private void showToast(final String message) {
getActivity().runOnUiThread(new Runnable() {
@Override
 public void run() {
 Toast.makeText(getActivity().getApplicationContext(), message,
 Toast.LENGTH_LONG).show();
 }
 });
}

In the same class, we need a method which loads the rootView in order to connect the XML file to the Java code. The code is the following:

@Override
 public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
 View rootView = inflater.inflate(R.layout.fragment_forum, container,false);
btnSend = (Button) rootView.findViewById(R.id.btnSend);
 inputMsg = (EditText) rootView.findViewById(R.id.inputMsg);
 listViewMessages = (ListView) rootView.findViewById(R.id.list_view_messages);
utils = new Utils(getActivity().getApplicationContext());
btnSend.setOnClickListener(new View.OnClickListener() {
@Override
 public void onClick(View v) {

 sendMessageToServer(utils.getSendMessageJSON(inputMsg.getText()
 .toString()));
 inputMsg.setText("");
 }
 });
listMessages = new ArrayList<Message>();
adapter = new MessagesListAdapter(getActivity(), listMessages);
 listViewMessages.setAdapter(adapter);
 client = new WebSocket(URI.create(WsConfig.URL_WEBSOCKET
 + URLEncoder.encode(name)), new WebSocket.Listener() {
public void onConnect() {
}
public void onMessage(String message) {
 Log.d(TAG, String.format("Got string message! %s", message));
parseMessage(message);
}
public void onMessage(byte[] data) {
 Log.d(TAG, String.format("Got binary message! %s",
 bytesToHex(data)));
// Message will be in JSON format
 parseMessage(bytesToHex(data));
 }
 @Override
 public void onDisconnect(int code, String reason) {
String message = String.format(Locale.US,
 "Disconnected! Code: %d Reason: %s", code, reason);
showToast(message);
 utils.storeSessionId(null);
 }
@Override
 public void onError(Exception error) {
 Log.e(TAG, "Error! : " + error);
showToast("Error! : " + error);
 }
}, null);
client.connect();
return rootView;
 }

Below is the construction of the message in a separate class:

public class Message {
 private String fromName, message;
 private boolean isSelf;
public Message() {
 }
public Message(String fromName, String message, boolean isSelf) {
 this.fromName = fromName;
 this.message = message;
 this.isSelf = isSelf;
 }
public String getFromName() {
 return fromName;
 }
public void setFromName(String fromName) {
 this.fromName = fromName;
 }
public String getMessage() {
 return message;
 }
public void setMessage(String message) {
 this.message = message;
 }
public boolean isSelf() {
 return isSelf;
 }
public void setSelf(boolean isSelf) {
 this.isSelf = isSelf;
 }
}

The message list adapter takes care of the ListView and the chat history, as per the following code:

public class MessagesListAdapter extends BaseAdapter {
private Context context;
 private List<Message> messagesItems;
public MessagesListAdapter(Context context, List<Message> navDrawerItems) {
 this.context = context;
 this.messagesItems = navDrawerItems;
 }
@Override
 public int getCount() {
 return messagesItems.size();
 }
@Override
 public Object getItem(int position) {
 return messagesItems.get(position);
 }
@Override
 public long getItemId(int position) {
 return position;
 }
@Override
 public View getView(int position, View convertView, ViewGroup parent) {
Message m = messagesItems.get(position);
LayoutInflater mInflater = (LayoutInflater) context
 .getSystemService(Activity.LAYOUT_INFLATER_SERVICE);
 if (messagesItems.get(position).isSelf()) {
 convertView = mInflater.inflate(R.layout.list_item_message_right,
 null);
 } else {
 // message belongs to other person, load the left aligned layout
 convertView = mInflater.inflate(R.layout.list_item_message_left,
 null);
 }
TextView lblFrom = (TextView) convertView.findViewById(R.id.lblMsgFrom);
 TextView txtMsg = (TextView) convertView.findViewById(R.id.txtMsg);
txtMsg.setText(m.getMessage());
 lblFrom.setText(m.getFromName());

return convertView;
}
}

On the server side, we need to configure the server to accept WebSocket connections and broadcast the messages to all users in the group chat. This is done by configuring the connection details for the server, which has already been installed on the machine accepting the connection:

public class WSConfig {
 public static final String URL_WEBSOCKET = "ws://192.168.0.102:8080/WebMobileGroupChatServer/chat?name=";
}
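
For reference, below is a skeleton of what such an endpoint might look like on a JSR-356 (javax.websocket) container. This is an illustrative sketch, not the actual WebMobileGroupChatServer code:

import java.io.IOException;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Illustrative group-chat endpoint matching the "/chat?name=" URL above
@ServerEndpoint("/chat")
public class ChatEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        // The client appends ?name=<user> to the connection URL
        String name = session.getRequestParameterMap().get("name").get(0);
        session.getUserProperties().put("name", name);
    }

    @OnMessage
    public void onMessage(String message, Session session) throws IOException {
        // Broadcast every incoming message to all open sessions
        for (Session peer : session.getOpenSessions()) {
            if (peer.isOpen()) {
                peer.getBasicRemote().sendText(message);
            }
        }
    }

    @OnClose
    public void onClose(Session session) {
        // Session cleanup would go here
    }
}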

Generating the Events Locations through Geo-Location

One of the requirements of the assignment was to use geo-location to show where the events will take place. The longitude and latitude details are parsed from the database and included in the location details for each event. Since the Google Maps API is one of the best geo-location services and is easily integrated into any Android app, this particular API will be used in the project to show the location of each event.

In order to use Google Maps, one first needs to obtain a map key through the Google API Console. Assuming that a Windows PC is being used, this is done by extracting the MD5 fingerprint of the debug certificate using the keytool utility from the JDK installation.

Through the command prompt, one needs to run the following script:

c:\<path-to-jdk-dir>\bin\keytool.exe -list -alias androiddebugkey -keystore "C:\users\<user-name>\.android\debug.keystore" -storepass android -keypass android

Running this command outputs the MD5 fingerprint of the debug certificate, which will be used in the next step.

Using the Google API Console, the generated fingerprint needs to be uploaded and registered to use the Google Maps API for Android devices. This associates the map we will be using with the key, by giving it the MD5 fingerprint. The reason for doing so is to associate the Android application with the Maps API; otherwise, the map cannot be rendered in the app.

The API key given by the Google API Console then needs to be copied as-is into the manifest so that it can be referenced correctly when the app runs.
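
In the Maps API v2 era, this meant adding a meta-data entry inside the application element of AndroidManifest.xml; a minimal sketch, with a placeholder key value:

<!-- Sketch: Maps API v2 key entry inside <application> -->
<meta-data
    android:name="com.google.android.maps.v2.API_KEY"
    android:value="YOUR_API_KEY_HERE" />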

Below is the xml content to generate the map screen:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
 android:layout_width="fill_parent"
 android:layout_height="fill_parent"
 android:background="#F38121" >
<fragment
 android:id="@+id/mapView"
 android:name="com.google.android.gms.maps.SupportMapFragment"
 android:layout_width="match_parent"
 android:layout_height="match_parent"
 android:layout_margin="15dp"
 android:padding="15dp" />
</RelativeLayout>

And here below, the code to build the map screen:

package com.chaplaincy.jc.tibiapp;
import android.os.Bundle;
import android.support.v4.app.Fragment;
import android.support.v4.app.FragmentManager;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.SupportMapFragment;
import com.google.android.gms.maps.model.LatLng;
import com.google.android.gms.maps.model.MarkerOptions;
/**
 * Created by mlaferla on 17/02/2015.
 */
public class MapFragment extends Fragment {
private SupportMapFragment fragment;
 private GoogleMap map;
private int mapType = GoogleMap.MAP_TYPE_NORMAL;
@Override
 public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
 return inflater.inflate(R.layout.fragment_googlemap, container, false);
 }
@Override
 public void onActivityCreated(Bundle savedInstanceState) {
 super.onActivityCreated(savedInstanceState);
 FragmentManager fm = getChildFragmentManager();
 fragment = (SupportMapFragment) fm.findFragmentById(R.id.mapView);
 if (fragment == null) {
 fragment = SupportMapFragment.newInstance();
 fm.beginTransaction().replace(R.id.mapView, fragment).commit();
 }
 }
@Override
 public void onResume() {
 super.onResume();
 if (map == null) {
 map = fragment.getMap();
 map.addMarker(new MarkerOptions().position(new LatLng(37.7750, -122.4183)));
 }
 }
@Override
 public void onSaveInstanceState(Bundle outState) {
 super.onSaveInstanceState(outState);
// save the map type so that when we change orientation, the map type can be restored
 LatLng cameraLatLng = map.getCameraPosition().target;
 float cameraZoom = map.getCameraPosition().zoom;
 outState.putInt("map_type", mapType);
 outState.putDouble("lat", cameraLatLng.latitude);
 outState.putDouble("lng", cameraLatLng.longitude);
 outState.putFloat("zoom", cameraZoom);
 }
}
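
For completeness, the counterpart to onSaveInstanceState() would read those values back and restore the camera once the map is available again. A minimal sketch under the same assumptions (CameraUpdateFactory would also need to be imported):

// Sketch: restore the saved camera position, e.g. once the map is ready
if (savedInstanceState != null && map != null) {
    LatLng savedTarget = new LatLng(savedInstanceState.getDouble("lat"),
            savedInstanceState.getDouble("lng"));
    float savedZoom = savedInstanceState.getFloat("zoom");
    mapType = savedInstanceState.getInt("map_type", GoogleMap.MAP_TYPE_NORMAL);
    map.setMapType(mapType);
    map.moveCamera(CameraUpdateFactory.newLatLngZoom(savedTarget, savedZoom));
}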