So, on the 18th of October, 2014, I completed my goal of running 100 miles in 50 days and wanted to share some data with my readers. Enjoy!
In my previous post about the 100 Miles Challenge, I shared my goal to run 100 miles in 50 days. I'm happy to report that I reached the 50-mile milestone on day 24, this past Saturday! 46 miles to go now, but the target sure looks reachable from here.
I changed a few things along the way. After running for the first couple of weeks, my legs weren't feeling great, so I decided to substitute 700 metres of swimming for 2 miles of running on some days. It takes me about 20 minutes to complete the 700-metre swim, and as a bonus, I get to take Shopoth for a swim lesson. So far, I've been swimming at Crowfoot YMCA, which is a nice 15-minute walk from my home, or just a 3-minute drive. They have open lanes pretty much any time of the day, making it quite convenient.
As for running, I'm following a stepped speed pattern on the treadmill, as shown in the following graph:
This pattern helps me since I can slow down to catch my breath and then get back up to speed, allowing the whole 2-mile run to finish in under 20 minutes.
By now, running and swimming without a partner no longer feels weird. It's still early to declare victory, but I'm genuinely excited. I'll report back once I'm done with my 100 miles, hopefully within the next three weeks.
A few people have shown interest in joining the 100 Miles Challenge, including Mo Khan, and I'm hoping to see some adoption in the near future. Even if that doesn't happen, thank you for wishing me good luck. Inspiration matters, both intrinsic and extrinsic.
Software architecture, or any design for that matter, needs to strike a fine balance between simplicity and power. Sometimes a design needs to deliberately leave out elements that would make life better in some ways but also cause a lot of friction in others, ways that may not directly impact the designer.
A few examples: a client-side MVC framework on top of a server-side MVC framework; client-side URL routing on top of server-side URL routing; persisting data in many different databases.
All these examples share a common theme: they offer some value in exchange for additional complexity. Often that complexity grows over time as development continues, eventually costing more than the value gained.
Microservices are the talk of the town these days. I wanted to share my thoughts on microservices based on some experiments we are running on our current project.
Recently, we deployed a microservice for a two-step verification feature on one of our projects. This was a strict business requirement: keeping your second-factor authorization on a separate server provides additional security in case the servers hosting your primary factor are compromised.
We operate in a cloud environment, so spinning up new servers is a simple process. The source code for this whole Ruby on Rails-based two-step verification service is no more than a couple hundred lines. So, in theory, deploying such a service should be very easy. In the end, however, it proved to be a lot of work.
For example, to deploy this service, we had to spin up a few servers for each of our staging and production environments. They had to be load balanced, for obvious reasons. They needed their own database to meet the business requirements, which in turn needed automated periodic backups. Networking, VPN, and DNS configurations were also required. Monitoring tools had to be configured so we would get alerts when things were about to fail. Deployment scripts had to be written for this service as well.
All in all, I'd say deploying this microservice took 20x the time it took to write the code for it.
Really. No kidding.
Since the service was deployed, we haven't shipped changes to it as often as we do to our main project. This is what I find to be the primary benefit of the approach: it doesn't require as big a regression test during our releases.
However, when things go wrong, debugging is harder, since more infrastructure pieces are involved. Weighing the additional work against the value gained, I'm really not sure microservices provide any real ROI.
The additional complexity of dealing with many servers, as opposed to one larger app, may or may not be worth it. I agree with Martin Fowler on the prerequisites of microservices: unless you have a streamlined, automated way to provision new servers with all the required parts, it may actually be best to keep working on the monolith. It's not the end of the world, and you'll have more time to spend with the family!
Weight loss is a tough chase. I've attempted it many times and failed.
Here’s a photo of day 2, when I ran my first 5K in years.
Last year I decided to change focus. Instead of aiming for weight loss, I started aiming for a better lifestyle. To some extent, that worked. But I can't say it really made a big difference of any sort.
I always thought I was a team-sport person who hated doing anything alone, so much so that I needed a partner even for swimming. After some introspection, I realized this wasn't going to work, and things needed to change if I really wanted a better lifestyle.
In came the treadmill: a second-hand, barely used Livestrong treadmill at a discounted price of $400 (a new one costs $1,700). Having it in the basement, facing the wall-mounted TV, makes such a big difference, especially in a place like Calgary, where a lot of preparation is required just to step out the door in the winter months.
After I bought the treadmill in October 2013, I ran my first 100 miles on it over the next four months, 2 miles per session. It felt so good every time I completed the 2 miles. The confidence gained from those first 100 miles shaped my next goal:
Run 100 miles in 50 days challenge
Today is day 4. I’ve completed 9 miles or 9% so far. Hoping to report back once I’ve reached 50%.
Hope this gets you started if you're thinking about a change and like taking on a challenge. If you take this challenge, let's team up and keep each other motivated along the way.
Every year, I aim to run a couple of public speaking sessions, presentations, or demos. This year, I had the opportunity to teach a hands-on session at CAMUG and wanted to share some lessons learned with my readers on this blog.
Thanks Terence for taking this photo.
Because this was a 3-hour session, and I believe attention spans can't be stretched beyond 40 minutes, I chunked the session as follows:
Chunk 1
- 08:45 - 09:00: played techno music to set the tone as people entered the room
- 09:00 - 09:10: meet and greet
- 09:10 - 09:20: presented a deck of 6 slides
- 09:20 - 10:00: showed the basics of AngularJS in a 40-minute stretch

Chunk 2
- 10:00 - 10:10: first coffee/snack/questions break, again with music playing in the background
- 10:10 - 10:50: another 40-minute stretch of coding
- 10:50 - 11:00: second break, with music

Chunk 3
- 11:00 - 11:30: last stretch of coding
- 11:30 - 12:00: questions and discussions
While the chunks helped me keep on track, I thought I could do a better job of switching between code-reading and code-writing mini-chunks. Next time I run a hands-on session, I'll use mini-chunks as follows:
- 5 minutes of code reading, followed by 10 minutes of code writing
Calling out separate reading and writing time should help with the fact that some people are still typing when I move on to the next topic.
This time, I did a few practice runs using QuickTime Player's screen recorder. This helped me a great deal: I was able to correct a few mistakes and got some ideas for improvements even before showing the session to my colleague.
I also did a 1-hour compressed practice run with Mo, and he had some great advice.
I do regret not having a video recording of the real session; it would certainly help me prepare for my next talk. So the lesson learned here is to record a video of future sessions whenever possible.
Lessons learned from here:
Find a topic for the presentation that has a value proposition in itself, rather than just being a hello-world example.
I had tried writing the code myself a few times ahead of the session and always made some typos that would break things here and there. So I decided to keep a backup of the code along with the seed project, and distributed it to the attendees. This helped quite a bit, as people could simply copy-paste segments of the code when frustrated by a missing curly brace or a sneaky syntax error.
However, during the live session, I decided to deviate a bit from the backup code to address some questions. While the deviation made certain things easier to explain, it also confused people who were following the backup and didn't see the exact same code on the projector.
Lessons learned from here:
If backup code is used, stick to it.
It was really good to see a full house on a summer weekend morning in Calgary. The audience was a really good mix, ranging from students to people with 35 years of experience. It was also inspiring to hear some of their feedback, and I hope to deliver an even better talk next time.
AngularJS needs to rename a lot of things and introduce higher-level abstractions
But instead of just complaining and ranting, I wanted to suggest a few concrete refactorings. I think these refactorings would make my life way easier the next time I'm about to introduce someone new to AngularJS.
I just wanted to share my 2 cents. I love this framework and find it really productive once you understand the core concepts, but I strongly suggest the AngularJS team take some time to simplify things. A refresh could make it hugely successful; otherwise it may die like a champion fighter who had all the right weapons but the wrong uniform.
2013 was an amazing year for my personal life in many ways. The top story of 2013 was of course the birth of our son, Shopoth. We also bought our first house, paid off our car loan, and ended the year with a much-needed vacation in Bangladesh, where we spent some quality time with our families.
2013 was my year of learning Haskell. I can't claim to be a seasoned Haskell programmer yet, but I think I learned a lot of new concepts that you only encounter once you're left in the uncharted territory of a functional programming language. So, in 2014, Haskell remains the language I want to get better at.
I'd also like to change a few things in 2014 in terms of my career. I've been developing web apps on the job for almost 8 years now. It's been exciting, and I'm amazed when I look back at how far the industry has matured over those years. At the same time, in 2014 I'd like to focus on my soft skills, especially negotiation. The target is to practice negotiation before I actually need to negotiate with others, so I'm well prepared to share my opinion on a topic, and afterwards to do a retrospective to find room for improvement.
In 2013, I was also admitted to a PhD program at the University of Calgary. So far, I've enrolled in only a single course, but the target is to publish one paper on API evolution by the end of this summer.
2014 is the first year of my thirties. I guess I should start checking items off my bucket list now. I'm still debating which one to target in 2014: a) learn to fly an airplane, or b) start my own side project. I'll keep you posted once I've made up my mind. Happy 2014 till then!
Oh, before I conclude, here are some stats about my open source contributions in 2013:
- 40K+ downloads of MvcMailer
- 400+ downloads of TextHelper
- 1.5K+ downloads of streamy_csv
- 17K+ visits to my blog
In 2014, I'd like to continue contributing to open source, preferably to some well-established projects.
Configuration in software is a way to build systems that can adapt to different needs. For example, if a website's language and date/currency formats are configurable, it can be set up to support multiple languages and regional formats. Configuration makes it possible to deliver such features without a lot of changes to the application source code.
However, the flexibility that configuration promises can be a trap at times. I have a definition of configurable as follows:
A configurable must have at least two configurations.
This is another way of saying YAGNI. But I find it more specific than YAGNI, because it quantifies the requirement and makes violations apparent.
Here are a few examples to illustrate my definition.
Custom interfaces with a single implementation.
Interfaces are often thought of as a configurable component, since a new implementation can be swapped in for an old one without changing the code that uses it.
Except, if your interface only ever has one implementation, it provides a false notion of flexibility. In practice, I've seen that for most custom interfaces, a new implementation almost always requires a change to the original interface, which means it wasn't really configurable after all.
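A minimal JavaScript sketch of this false flexibility (the mailer and service names here are invented for illustration, not from any real project):

```javascript
// Hypothetical example: a "pluggable" mailer abstraction with only one
// implementation ever written.
function SmtpMailer() {}
SmtpMailer.prototype.send = function (to, body) {
  return 'smtp:' + to + ':' + body;
};

// The indirection suggests we could swap in another mailer, but no second
// implementation exists, so the extra layer buys nothing yet.
function NotificationService(mailer) {
  this.mailer = mailer;
}
NotificationService.prototype.notify = function (user) {
  return this.mailer.send(user, 'hello');
};

// In practice, every construction site hardcodes the one concrete class:
var service = new NotificationService(new SmtpMailer());
```

Until a real second mailer appears, the abstraction has exactly one configuration, so by the definition above it isn't configurable at all.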
Default arguments in methods that are never passed a non-default value.
Default arguments are great, as they often simplify the common case. However, if a method with a default argument is never called with a non-default value, the default argument simply isn't worth having. Use a local variable instead.
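A sketch of what this looks like in JavaScript (the function names and the `'CAD'` default are invented for illustration):

```javascript
// Hypothetical example: a default argument that no caller ever overrides.
function formatPrice(amount, currency) {
  currency = currency || 'CAD'; // pre-ES6 default-argument idiom
  return currency + ' ' + amount.toFixed(2);
}

// Every call site in the codebase looks like this -- the default never varies:
var label = formatPrice(19.5);

// Simpler equivalent once you accept there is only one configuration:
function formatPriceSimple(amount) {
  var CURRENCY = 'CAD';
  return CURRENCY + ' ' + amount.toFixed(2);
}
```

Both produce the same result today, but the second version stops pretending the currency is a point of variation.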
Configuration key value pairs where there’s only one value.
Since magic numbers and hardcoded strings are bad, it's tempting to use the configuration file to hold such values. However, if there's only ever one value, it's probably a constant, not a configurable.
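For example (the setting name and value below are invented, not from a real config file):

```javascript
// Hypothetical example: a "configurable" setting that holds the same value
// in every environment. Routing it through a config object implies it
// varies per deployment, when really it never does.
var config = {
  maxLoginAttempts: 5 // set to 5 everywhere, never changed
};

// If no second configuration exists, a plain named constant is more honest:
var MAX_LOGIN_ATTEMPTS = 5;

function attemptsLeft(used) {
  return MAX_LOGIN_ATTEMPTS - used;
}
```

The constant still avoids the magic number; it just stops masquerading as a knob someone might turn.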
Exhaustively validating method parameters against all possible but unused values.
If you're writing a method that's only going to be called from another method in your project, you probably know what you're passing to it. Validating such methods against all sorts of negative inputs provides a sense of robustness without really adding any value.
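A sketch of the kind of dead defensive code this describes (the helper and its guards are invented for illustration):

```javascript
// Hypothetical example: defensive checks on an internal helper whose only
// caller lives in the same module and always passes well-formed input.
function totalInternal(prices) {
  // These guards can never fire in practice -- the single caller below
  // always passes a non-empty array of numbers -- so they add noise,
  // not robustness.
  if (!Array.isArray(prices)) throw new Error('prices must be an array');
  if (prices.length === 0) throw new Error('prices must not be empty');
  return prices.reduce(function (sum, p) { return sum + p; }, 0);
}

// The one and only call site:
function cartTotal() {
  return totalInternal([10, 20, 12.5]);
}
```

Validation earns its keep at system boundaries, where input genuinely varies; inside a module with one known caller, it's configuration for inputs that have only one configuration.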
I hope the definition makes sense. I'd love to hear your opinions and your own examples of configure-me-not.
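As a hedged illustration of page-specific JavaScript with top-level execution (the names here are invented, not the app's actual code):

```javascript
// Hypothetical example: login-page JavaScript that runs as soon as the
// file loads. In a real app this would be jQuery code binding handlers
// to the login form; a plain function stands in for that work here.
function setUpLoginForm() {
  // ...attach submit handlers, focus the username field, etc.
  return 'login form wired up';
}

// Top-level call: executes the moment the file loads -- which, under a
// single asset pipeline manifest, means on EVERY page of the app.
var loadResult = setUpLoginForm();
```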
Now, within the scope of the login page, this code executes just fine. However, with the asset pipeline, if this file is included in the application manifest, then every page that includes the manifest will execute this code on load. This is wasteful and, more importantly, may result in unexpected behaviors and conflicts.
To work around this problem when introducing the asset pipeline, the code needs to be wrapped in a method that can be called to initialize it only from the login page. Here's an example of the wrapper method:
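A minimal sketch of such a wrapper, with invented names standing in for the real login-page logic:

```javascript
// Hypothetical example: the same logic wrapped in a named function, so
// merely including the file in the manifest no longer executes anything.
var LoginPage = {
  setUp: function () {
    // ...the handler-binding logic moves in here, deferred until called.
    return 'login form wired up';
  }
};
// Nothing runs at load time; LoginPage.setUp must be invoked explicitly.
```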
Now that the logic is wrapped inside a method, the file can be included on all pages without causing any wasteful execution or risking unexpected outcomes or conflicts. The method can then be called from within the login page, as shown in the following example:
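A hedged sketch of that call site (the wrapper object is repeated here with invented names so the snippet stands alone):

```javascript
// Hypothetical wrapper object from the previous step, redefined so this
// example is self-contained.
var LoginPage = {
  setUp: function () { return 'login form wired up'; }
};

// Only the login page invokes the initializer -- for example, from an
// inline <script> tag rendered by the login view -- so no other page
// pays for it.
var pageStatus = LoginPage.setUp();
```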
This is, of course, only a minimum-change approach that will get the asset pipeline working for an existing app. I'd recommend refactoring the code to make it testable and adding unit tests as you go.
We have a 4-year-old Ruby on Rails project that's now running 3.2 with the asset pipeline, using only one manifest file. We used this simple approach to convert all the existing JS code, and it worked great. I hope it helps when you start upgrading your assets to use the pipeline.