Saturday, July 15, 2017

My developer toolkit 2017 (Mac)

Back in December 2010 I had blogged about the power tools I was using with Windows. Soon I will do a revamp of the Windows power tools which I am using on my Windows 10 PC. This post is about the list of tools that I use for my day-to-day activities on my Mac.

Terminal Utilities

Compared to the default terminal, I prefer to use iTerm2. iTerm also integrates with Oh My Zsh, which makes life a lot easier while working in the terminal. Refer to the Oh My Zsh cheatsheet for more details. I love all the Git-related aliases like gst for git status, gcmsg for git commit -m and many other git commands. I use the Avit theme with Oh My Zsh, which gives a nice look and feel to the different commands and their outputs on the terminal.
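For reference, here are a few of the git plugin aliases I use most often; the branch name and commit message below are just placeholders.

gst                          # alias for: git status
gco feature/my-branch        # alias for: git checkout feature/my-branch
gaa                          # alias for: git add --all
gcmsg "Fix typo in readme"   # alias for: git commit -m "Fix typo in readme"
gp                           # alias for: git push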

While working with tabbed windows in the terminal it can be quite confusing and hard to remember what you were doing in which terminal window. Tabset helps to name the tabs and also give them different colors. Displaying the tab name in the right-hand corner is quite helpful for me.

Code Editors

The more I use VS Code, the more I like its features. It is very elegant and has a nice theme- and plugin-based ecosystem for enhancing its capabilities. No wonder more and more people (even those who hated Microsoft) are using VS Code. If you don’t want the full-featured IDE of Visual Studio 2015 / 2017 but still want the better part of code editing, go for VS Code.

Before I started using VS Code, I was a fan of Atom. I like its simplicity. It also integrates very well with GitHub (it is actually created by GitHub). They call it the hackable text editor for the 21st century. Many people complain about the slowness of the Atom editor. I have not faced any issues so far. Maybe the files I was dealing with were within the bounds of Atom.

When I moved from Windows 10 to Mac, I started using Sublime Text 3. I find it similar to Atom in many ways. Sublime is the oldest of the three editors (VS Code, Atom & Sublime). As a result it has more features, themes and plugins.

When working with Java, I use IntelliJ IDEA from JetBrains. It is one of the best IDEs I have come across (obviously after Visual Studio). Coming from the .Net world, I find IntelliJ much easier to adapt to than Eclipse. The dark theme of IntelliJ makes me feel at home.

One of the plugins which I find very helpful in IntelliJ is the Key Promoter. It tells you how many times you have used the mouse when a keyboard shortcut was available for a command. I feel this is really useful for any developer who wants to get better at keyboard shortcuts.

Although I prefer to work with the terminal while using Git & GitHub repositories, I find the GitHub Desktop tool handy when I want to do some GUI-related work.

All three text editors seem to have quite a few things in common. In particular, plugins and themes are mostly ported from one editor to the other. I like the Material Theme and Monokai. The best part is that all three editors are cross-platform and work with Windows and Mac. That definitely reduces the learning curve.

Virtualization software

Docker allows you to spin up lightweight containers as compared to full-blown virtual machine images. If you don’t want to mess around with your laptop but want to try out some new tool, Docker is a good way to test it.
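For example, a throwaway container is enough to poke around with a tool without installing anything on the laptop (the image and command here are just an illustration):

docker run --rm -it ubuntu:16.04 bash    # disposable Ubuntu shell, removed automatically on exit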

Not all things can be done via Docker containers. Sometimes you still need to use a full virtual machine image. I tend to use VirtualBox for such cases.

Vagrant allows you to configure virtual machines in a declarative manner. It uses VirtualBox as the default provider for creating virtual machines.
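A typical Vagrant workflow looks something like the following (the box name is just an example); the Vagrantfile generated by the first command holds the declarative configuration of the machine.

vagrant init ubuntu/xenial64    # creates a Vagrantfile describing the VM
vagrant up                      # creates and boots the VM using VirtualBox
vagrant ssh                     # logs into the running VM
vagrant destroy                 # tears the VM down again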

General utilities

I use Ansible to automate the installation of software as much as possible on my MacBook Pro. Refer to my post Setup MacBook almost at the speed of light to know about it in more detail.

This is my favorite note-taking app. It works across all my devices including iPhone, iPad Pro, MacBook and Windows PC. I like the simplicity of the tool; it resembles a physical notebook. The organization of notes into workbooks and pages is something I like very much when it comes to taking notes. I have tried other alternatives like Evernote.

I use KeePass on Windows 10. There is a nice port of it available on Mac called MacPass. Use it if you want to store all your passwords in one place.

Dropbox is my preferred way of syncing documents across devices. I also use Google Drive and Microsoft OneDrive for different documents.

On Windows, I am a big fan of Open Live Writer for writing blog posts. Unfortunately it works only with Windows. On Mac I found Blogo. Although it is not as feature-rich as Live Writer, it serves the purpose.

I mostly read ebooks in PDF format. Acrobat Reader allows syncing of ebooks across devices using Adobe Cloud.

CheatSheet is one of the best free utilities I have ever come across. It displays all the keyboard shortcuts for the application that you are currently running. You don’t need to remember each and every shortcut. Just hold the hotkey (by default the Command key) for CheatSheet and you will see all the relevant keyboard shortcuts. Nowadays I have even started holding the Alt key on a Windows keyboard, hoping to see keyboard shortcuts when I work on Windows 10 :)

Spectacle is another nice little utility which allows you to resize & position application windows with keyboard shortcuts. I also have a secondary monitor attached to my laptop, and Spectacle is very helpful in moving windows across screens. Even if you don’t have multiple screens, you can still use Spectacle to great effect to resize and position windows.

f.lux is a utility which works with both Windows and Mac. It automatically adjusts the color temperature of the screen based on the time of day.

Battery Related utilities

It has a very simple UI and provides notifications on battery levels. I find it useful to be notified when the battery is fully charged.

There are a couple of other battery-related apps that I am currently testing before picking the one for my needs. These include coconutBattery, Battery Health and Battery Guardian.

Conclusion

The tools that we use keep changing every year. I am sure there are many more tools and utilities out there which would help make our lives simpler and our machines easier to work with. I would be interested in knowing about such tools.

Tuesday, July 11, 2017

The ‘Yes’ command

Background

Recently I was working on a personal project involving GitHub and Travis CI. This short post is about my experience of hacking some of the options in Travis CI. To start with, Travis CI offers free continuous integration for all public repositories on GitHub. I used this to set up a CI build for my Mac Dev Setup project. Once again, the initial work on the Travis build definition was done by Jeff Geerling. I am extending his good work to incorporate my changes.

One of the best parts of Travis CI is the option to choose Mac OS to run the CI build. You can refer to the link for more details. I started off with the xcode7.3 version of the image, which was the default image at the time of writing this blog. The build was working fine with this version of the image, so I thought of upgrading to the next OS X image version, labeled xcode8. This build was successful without any changes.

Problem with Homebrew uninstallation

I thought that it was quite easy to just change the image version and the builds would work without any problem. Unfortunately not. I skipped the xcode8.1 & xcode8.2 versions and tried to jump to xcode8.3 directly. The build failed with a timeout. Looking at the build log, I could see that the build was waiting for confirmation of the removal of the Homebrew package and was expecting an input to the prompt in the form of y/N. Look at line number 391 in the screenshot below.

So I thought of downgrading the image version to 8.2. It was still the same. Hmm. Something had changed between the xcode8 image and the later ones. So I went back to the earliest version after 8, which was 8.1. As part of the initial setup, I was uninstalling the Homebrew package, and from the 8.1 image version onwards the uninstaller expects a confirmation. I am not sure why it doesn’t do so in the earlier versions.

I was running the uninstall script by downloading it and running a Ruby command as
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall)"

Ruby does not provide an option to pass ‘Y’ as the default answer to any of the prompts. At least I did not find such an option using my Google search skills. So I started looking for ways to invoke the command silently and pass ‘Y’ as the default answer to the prompt.

‘Yes’ command to the rescue

There were multiple solutions available online, but I liked the one provided by a command strangely named yes. It can provide input of y or n to any prompt. The piping of commands and utilities in Unix / Linux based systems helped here to pipe the yes command into the Ruby script which I was using. The final command looks like
yes | ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall)"

Note that the default output of the yes command is y. If you wish to answer n, you can change the syntax to yes n :). It is quite ironic to use the yes command and reply no, but that’s how the author of this command designed it. They could have created a complementary no command which would respond with n. I even did a Google search to check if such a command exists. Unfortunately it does not.
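In general the pattern looks like this (the installer script name is just a placeholder):

yes | ./some-interactive-installer.sh      # answers every prompt with "y"
yes n | ./some-interactive-installer.sh    # answers every prompt with "n"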

With the help of the wonderful yes command my build is running fine now. I don’t know if there is a better way of supplying an answer to the prompt on Travis CI. If you know one, let me know via the comments.

Monday, July 03, 2017

Setup MacBook almost at the speed of light

Background

I bought a new MacBook recently. It is always fascinating to set up a new machine, but it is also a pain to look for all the tools that you have on your old machine and port them to the new one. Sometime back I started learning about Ansible, which helps to automate routine tasks. I came across a blog by Jeff Geerling, who is the author of the book Ansible for DevOps. Jeff and many others have used Ansible to set up their machines. I took inspiration from their blogs to automate the process of setting up a new MacBook Pro. Here is my experience.

Why Ansible?

Ansible is very easy to understand. It uses human-readable YAML syntax to describe the different actions which need to be performed. The Ansible actions executed as part of a playbook are idempotent: running the playbook again has no side effects on the setup. The same playbook can be run multiple times, and only the changes will be applied incrementally.
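As a minimal sketch (the package name is just an example), the following task installs git via Homebrew; running the playbook a second time detects that git is already present and changes nothing.

- name: Ensure git is installed via Homebrew
  homebrew:
    name: git
    state: present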


How did I use Ansible?

I started off by cloning the Git repository of geerlingguy, which is a good starting point. Jeff Geerling has done a very good job of laying out the framework for the initial set of tools. Jeff uses Homebrew as the package manager for installing packages; for the UI applications Homebrew Cask is used. I added some applications which were not existing in the original repo of Jeff Geerling.

It is very easy to get started. The repo follows the best practices from the Ansible world and organizes the different topics into the structure shown in the image below.

Let's start by looking at some of the important files & folders from this repo.

The files directory contains additional files required for configuring specific tools. Jeff Geerling had custom options / configurations for Sublime and the terminal. I added zshrc.in, which is the dotfile for Oh My Zsh. We will talk about Oh My Zsh a bit later in this post.

The roles directory contains the Ansible roles required for executing different tasks as part of the playbook. Here again, the original repo of Jeff Geerling had roles for managing dotfiles, Homebrew, mas and command line tools. I added the role for managing Atom packages.

The tasks folder contains the list of tasks or actions which need to be performed during installation. These are organized into multiple files like ansible-setup, extra-packages-setup etc. I added a file for oh-my-zsh-setup.

The important files in the complete structure are default.config.yml and main.yml. The main.yml file is the glue that binds all the things together. It is like the main program in a programming language like C# or Java: it contains references to the runtime variables, the roles used and the order in which the tasks need to be executed.
The default.config.yml file contains all the variables used by the tasks. It contains the list of tools & applications to be installed or uninstalled as part of the playbook. One advantage of using this approach is that the installed applications get moved to the Applications folder as part of the tasks. If we install applications manually, we sometimes need to move them from Downloads or another folder to Applications.
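As an illustration, a trimmed-down excerpt of default.config.yml looks roughly like this; the variable names follow Jeff Geerling's roles as far as I recall, and the package & app names are just examples from my list.

homebrew_installed_packages:
  - git
  - wget
homebrew_cask_apps:
  - iterm2
  - google-chrome
  - visual-studio-code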

Apart from the applications themselves, I also needed some additional libraries / tools. There were some which I was not using, so I deleted those packages. Below are some of the additions / enhancements I made to meet my needs.

I made 2 major modifications to the repo of Jeff Geerling: I added the automatic configuration of Oh My Zsh and of Atom plugins & themes. The steps below were needed to make these modifications.

Setup Oh My Zsh

I like to use Oh My Zsh as it enhances the default terminal with a better experience. It uses zsh as an alternative shell to the default shell. Oh My Zsh is a community-driven framework for managing zsh configurations. It has lots of themes & plugins and makes working on the terminal a really enjoyable experience. A bit of Google searching brought me to the GitHub repo for setting up Oh My Zsh by Remy Van Elst. I copied the zshrc.in file into the files directory. In the same way I added the oh-my-zsh-setup.yml file to the tasks directory. The last step was to add an include statement to the main.yml file to include oh-my-zsh-setup.yml in the tasks definition.
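A simplified sketch of what the oh-my-zsh-setup.yml tasks could look like is shown below; the actual file in my repo may differ slightly, and the repository URL is the standard Oh My Zsh location.

- name: Clone Oh My Zsh
  git:
    repo: https://github.com/robbyrussell/oh-my-zsh.git
    dest: ~/.oh-my-zsh

- name: Copy the zshrc template as the active dotfile
  copy:
    src: zshrc.in
    dest: ~/.zshrc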

Setup Atom plugins

Over the last few months, I had been using Atom as my text editor. I used multiple Atom plugins and themes. Atom has very good support for installing plugins using command line options. I especially like the Material UI theme which is supported by multiple editors including Atom & Sublime. I really like the minimalistic design of the Atom editor.
It would be nice to have these plugins also installed as part of the machine setup. Fortunately there is an Ansible role by Hiroaki Nakamura which allows exactly this functionality. You provide a list of Atom themes & plugins to this role and your machine will have all of them installed using Ansible. This is awesome. No need to go & search for plugins in the Atom UI. After the initial set of plugins, I have used the playbook for adding new ones with effortless ease.

To use the role, I added the role definition to the requirements.yml file. This file contains the list of roles which need to be downloaded. As a pre-condition, all the roles listed here are downloaded before running any tasks. The hnakamur.atom-packages role expects a variable named atom_packages_packages; it makes no distinction between themes & plugins. I listed all my Atom plugins & themes there. The last step was to include this role in the main.yml file.
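Putting it together, the role entry in requirements.yml and the variable it consumes look roughly like this (a sketch; the package names are examples from my own list):

# requirements.yml
- src: hnakamur.atom-packages

# default.config.yml
atom_packages_packages:
  - atom-material-ui
  - atom-material-syntax
  - minimap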

Setup Visual Studio Code plugins & Themes

I have just started using the Visual Studio Code editor. I was able to install VS Code using the default apps method from Jeff Geerling's playbook task. Similar to Atom or Sublime, VS Code has rich support for plugins & themes. I found an Ansible role by Gantsign named ansible-role-visual-studio-code. Looking at the readme file, it seems to be made for Ubuntu. The role also installs the VS Code editor; in my case I already had the editor installed using Homebrew Cask. I needed just the ability to install the plugins & themes.
From the code available within the repo, I found the part required to install a VS Code extension. The above role does a good job of installing VS Code & extensions for multiple users of the system. Mine is a single-user laptop & I did not need such functionality.

I ended up creating a file in the tasks folder named visual-studio-code-extensions-setup.yml. This file contains only one task, which installs the extensions. The task wraps the command "code --install-extension extensionName". The extension name is a placeholder in the above command and needs to be built dynamically. The default.config.yml file defines a list of extensions in a variable named visual_studio_code_extensions. The extension name uses a specific format and it took me some time to get the hang of it. If we install an extension using the VS Code IDE it works perfectly fine with just the extension name, but when we try to install the same extension using the command line, we need to prefix the publisher name. For example, the csharp extension is published by Microsoft, so we need to provide the fully qualified name ms-vscode.csharp.

The list of extensions can be specified as

visual_studio_code_extensions:
  - steoates.autoimport
  - PeterJausovec.vscode-docker
  - ms-vscode.csharp

But this looks very clumsy. This is where the simplicity and flexibility of Ansible & YAML are beneficial. We can define custom lists or dictionaries which split the publisher and the extension name. I used this approach to define the extensions as
visual_studio_code_extensions:
  - extensionName: autoimport
    publisher: steoates
  - extensionName: vscode-docker
    publisher: PeterJausovec
  - extensionName: csharp
    publisher: ms-vscode

The tasks file then concatenates the publisher & the extension.
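A minimal sketch of that task in visual-studio-code-extensions-setup.yml is shown below; it simply joins the two fields with a dot and passes the result to the code command line.

- name: Install Visual Studio Code extensions
  command: "code --install-extension {{ item.publisher }}.{{ item.extensionName }}"
  with_items: "{{ visual_studio_code_extensions }}"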

Next steps

There are still some manual steps involved in setting up the machine. As of now, I have not found a way to use Ansible to install applications from the Apple App Store. I had to manually install one app, Blogo, which I used for writing this blog post. I am still looking for ways to automate this. There might be a way to invoke a shell command using Ansible which would allow installing App Store apps; I have not tried it so far. A better way in my opinion would be to have an Ansible role which can take a list of apps to be installed and silently install them.

[Update]: I received a tweet from Jeff Geerling that there is a mas role defined within his playbook which can be used to install apps from the App Store by specifying the email & password linked to the Apple ID account. I will try this approach and update the contents accordingly.


Conclusion

At the end of the exercise, my laptop was set up and looked like below.

All the applications that you see, as well as additional ones which are not visible in this screen (like VS Code), were installed using the Ansible playbook. It took just one single command to get these apps installed. The setup can be replicated on any other MacBook with minimal changes. Automating the installation steps has saved me much more time to do useful stuff (like writing this blog :)). You can also set up your Mac using similar steps. If you wish to do so, refer to the readme file available at my GitHub repo. I intend to keep updating this repo with the changes that I make to my dev environment. Feel free to contribute to this repo.

Tuesday, June 27, 2017

My Experience with Voxxed Days Singapore event

Brief history about the event

This year was the first time a Voxxed Days event was held in Singapore, on 2nd June 2017. Launched in 2015, the event series is gaining popularity worldwide for its close association with the tech community. It is branded as the international conference for developers. It provides an opportunity for developers to get close to world-renowned speakers & industry experts. The topics covered in general at Voxxed Days events include server-side Java, the Java language, cloud & big data, programming languages, architecture, methodology etc. The Singapore event, being the first, was a good opportunity for the organizers to put their best foot forward and make it a grand success. Let's see if they managed to do it or not.

I was part of a small group of people from our organization who were in charge of manning the booth we had set up. Our organization was one of the platinum sponsors of the event. As a result I had the opportunity to visit the location one day in advance in order to set up our booth. All the other sponsors were also setting up their booths, as the event was scheduled to start at 8 AM on the Friday morning.

CACIB Booth

Our group spent about a couple of hours organizing the flyers into a nice goodie bag along with the organizers. We also set up the booth with all the stuff we had at our disposal. Next to our booth there was Pivotal, and on the opposite lane there were the Google & Red Hat booths. Everyone seemed so excited about the next day's events.

Panoramic view of the booths

The Keynote

The event started with a brief introduction by Alan Menant about the schedule, which was split into 3 different tracks, where the rooms were located & how to access them.
Welcome speech by Alan Menant

Then mementos were presented to all the sponsors of the event, and the stage was set for the rest of the day. The big room was completely full with more than 300 participants; including speakers & organizers there were more than 350 people in the same room.
Memento presented to sponsors

Rajesh Lingappa from RedMart kicked off the proceedings with his keynote speech. He presented the approach used by RedMart in their quest to achieve a fully automated continuous delivery pipeline.

RedMart DevOps

The message from his presentation was that in order to achieve the flexibility of deploying to production multiple times a day, we need to automate as much as possible. Along with automation, testing thoroughly to have confidence in the changes being deployed is also of paramount importance.

Guillaume Laforge keynote
The second half of the keynote was by Guillaume Laforge on IT holy wars. His was a different perspective on various aspects related to software development including programming languages (Java vs C#), IDEs (IntelliJ vs Eclipse), indentation (4 spaces vs tabs) and many such facets which only programmers / developers can relate to. The message from Guillaume was that in spite of all the differences we must keep moving forward and deliver quality software. It does not matter if we use Java or C# as our programming language; as long as we address the customer needs and can solve the business problems, it is ok to have differences in opinion. In hindsight it makes sense. If everyone was using the same standards / languages / tools / approaches, our life as developers would have been quite robotic & boring.

The Sessions which I attended

Following the keynote, there were 3 tracks running in parallel. Attendees were free to choose which session they wanted to attend and could switch in between if they did not find the content relevant to them. I attended the following sessions.

Reactive Microservices on the JVM with Vert.x by Burr Sutter

Burr Sutter Reactive Microservices talk
The session started with Burr asking a couple of volunteers to hold very small sensors in their hands, which were sending temperature information about the room to programs running on the speaker's laptop. From sensors, Burr moved on to the reactive microservices space, and the demo, with a web app running a game inside the browser, was played by many participants. The dynamic nature of configuration changes reflecting immediately on the players involved was very well demonstrated and highly appreciated by the audience. It was one of the most energetic demos I have seen in my entire career.




Cloud Native Java by Josh Long

Josh Long Cloud Native Java talk
I am relatively new to the Java world. I had heard a few words of praise for Josh Long from the guys who have been involved with Java development for a long time. His demo was full of funny moments with very subtle sarcastic remarks about people who are not up to date with the technical advances in Java. At one point during the demo Josh posed for the photographer and brought the whole floor down with his hilarious comments. For those who know the game of cricket, it was almost as if listening to Geoffrey Boycott doing cricket commentary. When Josh said “My grandma knows how to do xyz, we are in 2017, who does abc nowadays?”, it was almost reminiscent of the way Sir Geoffrey Boycott would have said “My grandma can hit that juicy full toss for a six”. At times it looked as if Josh went overboard with his energy levels, but with the kind of live demos that he showed, he could very well afford to do that and much more. If I compare him with some of the renowned speakers from the Microsoft world, Josh would be about 3 times faster at talking and typing than my favorite, Scott Hanselman. Within a week or so of the Voxxed Days event, I saw a Twitter post with both Scott & Josh together. It was like two legends from the Microsoft & Java worlds coming together.

Mesos vs Kubernetes vs Swarm: Fight by Christophe Furmaniak

Christophe on container orchestration
The topic seemed interesting enough by the title, but somehow it ended up being too theoretical. Many people left the session after a few minutes to attend other talks. I guess there were not many participants who were aware of the capabilities of the container orchestration tools being compared; not many participants seemed to be using containers in their production environments. To compare the 3 tools without having much knowledge about the subject was like doing guesswork. Also, the demos shown by the presenter came a bit late, and I guess people could not relate to what exactly was happening and why. It could have been better to split the demos into smaller manageable chunks to make things more relevant.

Resilient Microservices with Kubernetes by Atamel Mete

Atamel Mete Kubernetes talk
This talk was much more hands-on, with deep-dive type settings. The speaker used ASP.NET Core with Docker running on Linux as an example. It was nice to see .NET Core being used for a demo in a conference dominated by people from the Java world. The efforts put in by Microsoft in embracing open source and making contributions to the community are also quite visible, with many presenters using Visual Studio Code as their preferred editor, not just for .NET projects but also for other technologies. Overall the talk was successful in delivering the message that it is possible to use .NET in a Linux environment. Docker played an important role in containerizing this demo app. Microservices & Docker seemed to go hand in hand in most of the talks. Although neither technology depends on the other, they complement each other in building applications in today's world.

Microservices with Kafka by Albert Laszlo and Dan Balescu

Kafka talk
This was an interesting session. The presenters shared their experience of using Apache Kafka in an enterprise application for a large bank in Europe. They shared their journey and the steps taken to choose Kafka for interacting between multiple systems / applications. It was one of the most interactive sessions I attended. There is no doubt that Kafka is becoming quite popular as a messaging layer, not just in the big data world but also for use cases outside of big data.



What else other than the talks?

The event presented different opportunities for sponsors and attendees to extend their professional networks. Companies associated with technology like Red Hat, Google & Spring had the opportunity to showcase their upcoming products and technologies. There were other sponsors who were promoting services like training & placement.

CACIB booth | Google Cloud demo | Pivotal booth | Red Hat booth

Zenika booth

There was a special guest, Kris Howard, who came all the way from Australia to promote the YOW Conferences. She was the most active person tweeting about the event and the different talks, with the Twitter handle @web_goddess.

Yow booth

Summary

Overall it was a wonderful event, very well organized. Kudos to the organizing committee for bringing in such good speakers from different parts of the world. The quality of the talks which I attended was extremely good. I am looking forward to the next one. We had almost 40 people attending from our organization, and the feedback from all of them was very positive. The session which I did not attend, but which was highly appreciated by the participants, was by Edson Yanaga, named Curing your Domain Model Anemia with effective & clean tips from the real world. The knowledge & passion of the speakers was contagious.

Apart from the quality of the talks & speakers, the organizers need to be given a special pat on the back for all their hard work in choosing a venue as iconic as the Marina Bay Sands. The food was one of the best I have had at any such event in the last few years. Listening to all the wonderful talks, I was definitely motivated to learn new things. I am sure every participant had their own takeaways from the sessions and the overall conference. It was a good team bonding exercise for our team as well. I hope the organizers will be able to keep up with the expectations in the upcoming years. The bar has been set high, and hopefully it can only go up from here.

You can watch the recordings of all the sessions from the event on YouTube.

Below are some pictures from the event.


Sunday, March 12, 2017

Getting started with NDepend Pro 2017

Background

Back in June 2016, I had written a review of NDepend Pro. Recently NDepend released an upgraded version and I was fortunate to get access to it. This post is about my first experience of using it. I will use the same project, the Martingale Theory Simulator, whose source code is available on GitHub and which was analyzed last time using the earlier version.

IDE Support

The first thing you notice with the new version is the support for Visual Studio 2017 projects & solutions. I had received this version before the release of VS 2017. It is heartening to see support for the latest & greatest version of Visual Studio from day one.

IDESupport 

Another thing I noticed immediately after launching the VisualNDepend exe is the integration with industry-standard tools like VSTS, TeamCity, SonarQube and Reflector. Except for VSTS, all the other tools were supported in the earlier version as well. This upgrade supports VSTS instead of TFS.

BuildSupport

I first ran the analysis with the older version of NDepend Pro (6.3) and then used the same project to analyze with the new version, 2017.1. The new version was able to identify the presence of existing analysis results and warned me that, due to changes in the analysis method, many things would be marked as new even if there is no change in the source code.

Warning

 

Analysis Results

The analysis dashboard has a layout and look and feel similar to the previous edition, with one exception: if you look at the screenshot below you will find the Technical Debt section highlighted.

Technical Debt

If you have used SonarQube in the past you will be able to relate to this. In my opinion, measuring technical debt and the effort required to go from one particular rating to another (from B to A in 23 minutes in the above example) is extremely useful. It gives the project team a quantitative measure of the effort required to address the technical debt. I like this feature the most. I was expecting it to be based on the SQALE rating, and it does in fact use this method to calculate the debt.

Along with the snapshot, the dashboard also shows the trend for each of the measures used for code quality check.

Trends

The trends are always helpful in depicting the progression of the code. In my experience I have seen teams start with excellent figures for all the quality measures (like code coverage, cyclomatic complexity etc.). As the project moves on, you start to see a drop in certain numbers. If the trend is visible, it can help the team take necessary actions before things get out of hand.

Nowadays many teams are moving towards automated deployments. Continuous integration plays a very important part in automating different aspects of the software factory. It is important for tools like NDepend to support features which can be easily integrated into CI / CD pipelines.

In the installation folder I see that there is a console exe (NDependConsole.exe) available. I would like to try this option in the future to see how it can be integrated with an automated build process.

Conclusion

This post was just to get a feel for what is available in the latest version of NDepend Pro. I particularly liked the inclusion of the Technical Debt indicator. I am yet to explore all the new features. I would like to explore the CI / CD related features in future posts. Since NDepend Pro 2017 supports VS 2017, cross-platform development using .NET Core is another aspect on my mind. All those possibilities will come in future posts.

For the time being I find the little additions in the latest version quite useful. I hope you will also find this wonderful tool useful for your real projects. Until next time, Happy Programming.

 

Sunday, July 03, 2016

NDepend Pro review

Background

NDepend is a tool for performing code analysis with regard to different aspects like code complexity, code coverage, static code analysis similar to that done by StyleCop, dependency management and many more useful features. In a typical enterprise application, different tools are used to achieve these things. I have worked mostly with the .Net framework in my professional experience, and Microsoft Visual Studio is the default option to perform many of these tasks. I have used other tools like NCover for measuring code coverage. Back in 2011 I had used NDepend specifically to measure the cyclomatic complexity of different modules in our application.

How did I come across NDepend Pro?

Recently I was approached by an NDepend developer to try the Pro version and evaluate its features. This post is about my experience of using NDepend (almost after a gap of 5 years) and seeing how it has evolved over that period of time.

I started with a very small codebase which I had developed for simulating the Martingale theory. The code is available on GitHub. The codebase is very simple: it consists of a library which computes the amount for a particular trade based on the payout percentage, a set of unit tests which test the functions of the library, and a console application which acts as the client for invoking the library functions.

Analysis results

Below is the output of the analysis done using NDepend Pro. Let's look at some of the details available via the different tabs.

Dashboard

NDepend Analysis Results

The summary breaks down into following categories on the dashboard

  1. # Lines of code
  2. # Types
  3. Comment %
  4. Method complexity
  5. Code Coverage by Tests (Allows you to import code coverage data from other tools)
  6. Third-party usage
  7. Code Rules

I find the Code Rules section personally very useful as it gives you hyperlinks to drill down further into the details.

Dependency Graph

dependency graph

This graph gives a visual representation of the relationships between different assemblies. The part I like best is the interactivity of the graph: you can hover over the nodes and the affected nodes are dynamically highlighted using different colors. In the above example we have only 3 namespaces, but you can imagine how useful this can be in a real project with hundreds of classes and many namespaces involved.

There are multiple options to customize the way the Dependency Graph is represented. The default is based on the number of lines of code, but you can change it to any of the options shown below.

Dependency graph options

Dependency Matrix

The Dependency Matrix gives a matrix view of how different assemblies depend on one another. I find this feature very helpful as it gives, in an instant, a quick representation of the links between the different assemblies in the application. The tools which I have used in the past, like Visual Studio, NCover etc., do not provide such a feature.

Dependency Matrix

A well-designed application will have a good distribution of classes across different assemblies. You will also be able to see what the impact of replacing one thing with another could be. Let's take an example. Suppose you use a third-party component like Infragistics in your application and for some reason you wish to replace it with something else. Using the dependency matrix you could find out which assemblies depend on Infragistics.

There are multiple options available via the context menu which give in-depth analysis of the code. I have not explored these options so far.

Metrics Heatmap

Metrics Heatmap

The heatmap is a feature which shows how classes are spread across different namespaces based on cyclomatic complexity. The default measure of cyclomatic complexity can be changed to various other options like IL cyclomatic complexity, lines of code, percentage of comments etc.

For a small codebase like my sample Martingale theory tester, this analysis using NDepend is quite interesting. To make full use of the wonderful features the tool provides, I intend to use it on a much larger codebase. I was recently referring to the CQRS Journey code from the Microsoft patterns & practices team. Let me see what I can discover from this decent-sized codebase using NDepend. I will keep the readers of this blog posted with the details in future posts.

Conclusion

I have always been a fan of code quality tools, and NDepend has a lot to offer in this area. I particularly liked the Dashboard and the Dependency Matrix along with the Dependency Graph. I have just scratched the surface and I am excited to try the other features offered by the tool. The feature I am most interested in exploring in future posts is the benchmarking of codebases. Based on my past experience and whatever little I have seen so far of the latest version, I would highly recommend NDepend for code analysis. I personally like the integrated nature of the tool, which provides so many aspects related to code quality in one single place. You can choose to run it as a standalone application, which is what I did in this instance, or you can integrate it within the Visual Studio IDE. I also like the fact that it integrates nicely into the build process, which is a must in today's world. Interoperability with other tools like TFS, TeamCity and SonarQube is another benefit.

I personally like the options offered to customize the default settings and configurations. The rules, for example, are mainly derived from what Visual Studio uses by default. You can always choose to filter out the rules not relevant to your analysis. Another nice feature is the Code Query Language offered by NDepend, based on LINQ. This gives you a great ability to explore your code using queries.
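To give an idea of what such a query looks like, below is a small CQLinq rule written from memory (treat it as a sketch rather than an exact rule shipped with the tool); it flags methods longer than 30 lines of code.

// flag overly long methods
warnif count > 0
from m in Application.Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }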

There is so much to explore in NDepend that it is impossible to do it in one blog post. One feature which I did not cover in this post is the Queries and Rules Explorer. In my opinion it deserves a dedicated post. I will try to cover some of these features in future. Until next time, Happy Programming.

Monday, January 04, 2016

Configure Standalone Spark on Windows 10

Background

It's been almost 2 years since I wrote a blog post. Hopefully the next ones will be much more frequent. This post is about my experience of setting up Spark as a standalone instance on a Windows 10 64-bit machine. I got back to a bit of programming after a long gap, and it was quite evident that I struggled a bit in configuring the system. Someone else coming from a .Net background and new to the Java way of working might face difficulties similar to those I faced over a day to get Spark up and running.

 

What is Spark

Spark is an execution engine which is gaining popularity due to its ability to perform in-memory parallel processing. It claims to be up to 100 times faster compared to Hadoop MapReduce processing. It also fits the distributed computing paradigm of the big data world. One of the positives of Spark is that it can be run in standalone mode without having to set up nodes in a cluster. This also means that we do not need to set up a Hadoop cluster to get started with Spark. Spark is written in Scala and supports the Scala, Java, Python and R languages as of writing this post in January 2016. Currently it is one of the most popular projects among the different tools used as part of the Hadoop ecosystem.

 

What is the problem in installing Spark in standalone mode on a Windows machine?

I started by downloading a copy of the Spark 1.5.2 distribution (Nov 9, 2015) from the Apache website. I chose the version which is pre-built for Hadoop 2.6 and later. If you prefer, you can also download the source code & build the whole package. After extracting the contents of the downloaded file, I tried running the spark-shell command from the command prompt. If everything is installed successfully, we should get a Scala shell to execute our commands. Unfortunately, on a Windows 10 64-bit machine, Spark does not start very well. This seems to be a known issue, as there are multiple resources on the internet which talk about it.

When the spark-shell command is executed, multiple errors are reported on the console. The error which I received showed problems with the creation of SqlContext. There was a big stack trace which was difficult to understand.

Personally this is one thing which I do not like about Java. In my past experience I have always found it very difficult to debug issues, as the error messages show some error which may not be the real source of the problem. I wish Java-based tools and applications in the future will be easier to deploy. In one sense it is good that it makes us aware of many of the internal details, but on the other hand sometimes you just want to install the stuff and get started with it without spending days configuring it.

I was referring to the Pluralsight course related to Apache Spark fundamentals. The getting started and installation modules of the course were helpful as a first step in resolving the issue. As suggested in the course, I changed the verbosity of Spark's output from INFO to ERROR and the amount of information on the console reduced a lot. With this change, I was immediately able to see the error related to the missing Winutils, which is a utility required specifically on Windows systems. This is reported as issue SPARK-2356 in the Spark issue list.
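For reference, the verbosity change is done in Spark's conf folder by copying the log4j template and editing the root category; the commands below assume SPARK_HOME points to the extracted Spark folder.

cd %SPARK_HOME%\conf
copy log4j.properties.template log4j.properties
REM then edit log4j.properties and change the line
REM   log4j.rootCategory=INFO, console
REM to
REM   log4j.rootCategory=ERROR, console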

After copying the Winutils.exe file from the Pluralsight course into the Spark installation's bin folder, I was getting a permissions error for the tmp/hive folder. As recommended in different online posts, I tried changing the permissions using chmod and setting them to 777. This did not seem to fix the issue. I tried running the command with administrative privileges. Still no luck. I updated the PATH environment variable to point to the Spark\bin directory. As suggested, I added SPARK_HOME and HADOOP_HOME to the environment variables. Initially I had put the Winutils.exe file in the Spark/bin folder; I moved it out to a dedicated directory named Winutils and updated the HADOOP_HOME environment variable to point to this directory. Still no luck.

As many people had experienced the same problem with the latest version of Spark, 1.5.2, I thought of trying an older version. Even with 1.5.1 I had the same issue. I went back to the 1.4.2 version released in November 2014 and that seemed to create the SqlContext correctly. But that version is more than a year old, so there was no point sticking to an outdated version.

At this stage I was contemplating the option of getting the source code and building it from scratch. Having read in multiple posts about setting the JAVA_HOME environment variable, I thought of trying this approach. I downloaded the Java 7 SDK and created the environment variable to point to the location where the JDK was installed. Even this did not solve the problem.

 

Use the right version of Winutils

As a last option, I decided to download Winutils.exe from a different source. In the downloaded contents, I got Winutils and some other DLLs as well, like Hadoop.dll, as shown in the figure below.

Winutils with hadoop dlls

After putting these contents in the Winutils directory and running the spark-shell command, everything was in place and the SqlContext was successfully created.

I am not really sure which step fixed the issue. Was it the JDK and the setting of the JAVA_HOME environment variable? Or was it the update of the Winutils exe along with the other DLLs? All this setup was quite time consuming. I hope this is helpful for people trying to set up a standalone instance of Spark on Windows 10 machines.

While I was trying to get Spark up & running, I found the following links, which might be helpful in case you face similar issues.

The last one was really helpful; from there I took the idea of separating the Winutils exe into a different folder and also of installing the JDK & Scala. But setting the Scala environment variables was not required, as I was able to get the Scala prompt without installing Scala.

Conclusion

Following are the steps I followed for installing a standalone instance of Spark on a Windows 10 64-bit machine.

  • Install the JDK (version 6 or higher)
  • Download the Spark distribution
  • Download the correct version of Winutils.exe along with its accompanying DLLs
  • Set the environment variables JAVA_HOME, SPARK_HOME & HADOOP_HOME (see the sketch after the note below)

Note: when running the chmod command to set 777 permissions on the tmp/hive directory, make sure to run the command prompt with administrative privileges.
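Putting the last two steps together, here is roughly what I ran; the paths are examples for my machine, and winutils.exe must sit in a bin folder under HADOOP_HOME.

REM set the environment variables (adjust the paths to your machine)
setx JAVA_HOME "C:\Program Files\Java\jdk1.7.0_79"
setx SPARK_HOME "C:\Spark\spark-1.5.2-bin-hadoop2.6"
setx HADOOP_HOME "C:\Winutils"

REM from an elevated command prompt, fix the tmp\hive permissions
%HADOOP_HOME%\bin\winutils.exe chmod 777 C:\tmp\hive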
