Blog - Page 3


Since I discovered the Google PageSpeed Insights metrics I’ve been tweaking my blog output to maximise the score. Recent design changes mean I’m now scoring very well on these tests.

I’ve bought a Panasonic GH3. This wasn’t in the plan, and it certainly wasn’t in the budget, yet here we are.

A few months ago I described how I switched this blog from WordPress to my own static blog generation tool called DmBlog. DmBlog was quick and dirty but worked really well, and was a big improvement in usability over WordPress.

I also found the generated site method really useful for making design changes easily and performing invisible tweaks to the output HTML. I wanted to extend DmBlog to also cover the non-blog sections of my site.

Whilst Micro Four Thirds (MFT) has the widest selection of lenses of the mirrorless formats, it has an issue in the standard zoom (28-70mm equivalent) category. Although there are many lenses in that range, the majority are F3.5-5.6 kit lens variations, and if you want something a bit better there are only two options: the Panasonic 12-35/2.8 and the Olympus 12-40/2.8.

The two F2.8 lenses are credible options with different advantages, but they both cost around £800 which is too rich for my blood, and secondhand values are robust. So ultimately you have a choice of a kit lens or spending £800.

I’ve put some effort into redesigning this blog to make it more readable.

The previous design was (mostly) fine on 10" tablets or desktop devices but was pretty useless on smaller screens, where devices would have to render in desktop mode, requiring the reader to zoom and scroll. On a 5" phone it was just about usable in landscape, but hardly pleasant. I wanted to do better.

I’ve sold my underperforming Olympus 17/1.8 lens and bought an Olympus 25/1.8. The aim was to have a shorter focal length micro 4/3 lens that is actually worth using.

This isn’t a stand-alone review of the Olympus 25/1.8, but more of a discussion in the context of my existing equipment.

A few months ago I wrote my X100S vs E-M5 comparison, and I largely stand by it.

But with the benefit of more time a few things have sunk in. I’ve found myself choosing the X100S for casual photography far more often than the E-M5. I’m making this choice despite the E-M5 being a better photographic tool in many ways: the image stabiliser, the superior focusing, and the better software workflow. So why is the E-M5 getting left at home, and does it imply my m4/3 setup is ‘broken’?

Having defined a scoring system for audio dynamics, I wanted to use this to test the idea that audio dynamics are getting worse over time.

The simplest test would be to plot the average score by year, but this could be misleading as my music tastes have changed over the years. These days I’m listening to less commercial guitar-driven indie and more from emerging artists across different styles.

To avoid this affecting the results I instead decided to see how the dynamics have changed for artists with a reasonably long recording career.
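The per-artist approach can be sketched in a few lines of Python. This is a minimal illustration assuming each track is an (artist, year, score) tuple; the function names and the ten-year career threshold are illustrative, not the tool I actually used:

```python
from collections import defaultdict

def slope(points):
    """Least-squares slope of score against year for one artist."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    return num / den if den else 0.0

def artist_trends(tracks, min_span=10):
    """tracks: iterable of (artist, year, score) tuples.
    Keep only artists whose recordings span at least min_span years,
    and return each one's dynamics trend (score change per year)."""
    by_artist = defaultdict(list)
    for artist, year, score in tracks:
        by_artist[artist].append((year, score))
    trends = {}
    for artist, points in by_artist.items():
        years = [y for y, _ in points]
        if max(years) - min(years) >= min_span:
            trends[artist] = slope(points)
    return trends
```

A negative trend for an artist means their recordings have become less dynamic over their career, which sidesteps the changing-tastes problem because each artist is only compared with themselves.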

Last time I discussed the correlation between dynamic range and overall recording quality. One of the issues was that the simple, single-aspect metric being used failed to spot some problems. To improve the use of dynamics as an indicator of overall quality I wanted to include other aspects of the analysis in the rating.

Additionally, I wanted the result to be a simple score with a more obvious meaning to the reader. The previously used metric has dB ranges as values, which, while valid for sorting, only really make sense relative to tracks with better or worse values.

This post describes my first attempt at solving these issues with a simple audio dynamics scoring system.
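To illustrate the kind of mapping involved, here is a minimal sketch that turns a dB dynamic-range measurement into a 0-100 score. The floor/ceiling values and the clipping penalty are assumptions made up for this example, not the actual scoring system:

```python
def dynamics_score(dr_db, clipped_fraction=0.0,
                   floor_db=4.0, ceiling_db=16.0):
    """Map a dynamic-range measurement in dB onto a 0-100 score.
    Values at or below floor_db score 0; at or above ceiling_db, 100.
    A penalty is subtracted for the fraction of clipped samples
    (a second aspect of the analysis folded into one number)."""
    span = ceiling_db - floor_db
    base = (min(max(dr_db, floor_db), ceiling_db) - floor_db) / span * 100
    penalty = min(clipped_fraction * 200, base)  # crude clipping penalty
    return round(base - penalty)
```

The point of the single score is exactly what the post argues: "63" is immediately meaningful on its own, whereas "9.4 dB" only means something when compared with other tracks.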

In my previous post I used a measurement of a track’s dynamic range as a way of deciding what quality level to encode at. This approach was based on a perceived correlation between dynamic range and overall recording quality. This post examines that correlation a bit more closely.
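That idea can be sketched as a simple threshold function choosing a LAME VBR quality level (lower `-V` means higher quality) from the measured dynamic range. The thresholds here are illustrative assumptions, not the values I actually used:

```python
def vbr_quality(dr_db):
    """Pick a LAME VBR -V level from a track's measured dynamic
    range in dB: well-mastered recordings get more bits, heavily
    compressed masters have less detail worth preserving."""
    if dr_db >= 12:
        return 0   # dynamic master: encode at highest VBR quality
    if dr_db >= 8:
        return 2   # average master
    return 4       # loudness-war casualty: save the disk space
```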
