Christmas technology complaints 2018

It is often said that things are not what they used to be. That’s obviously true. And that they are getting worse – every generation says that. Things, the young, relationships, you name it. But it is also often said that, at the same time, many things are getting better. Now, where is the truth, eh?

Planned obsolescence is actually a pretty old concept, but while it used to be merely a nasty economic model, nowadays it is plain short-sighted once we realize the sheer global scale of human activity and its impact on our planet. Especially the “things” – that is, anything producing waste – should be built to last a bit longer than is usual today. A mobile phone for a year? Are we nuts?

I’m not going to talk about the distribution of wealth, which is a big topic on its own, but overall the whole system is not sustainable. While each of us can do something about it, individuals – even many individuals – are still just part of the system. Too many powerful people still don’t care about anything other than more power and profit, however short-sighted that is. The consequences are not for them – not in this life, anyway.

Let’s just talk about the quality of current technology – and sadly, mostly about software quality.

Home laser printer case: Xerox Phaser 3020Bi

We bought this lovely little cheap printer for a couple of reasons. We don’t print much – in two years we printed perhaps 200 pages. We didn’t want to print on our old inkjet anymore, as it was really expensive for us (a single set of inks costs more than this Xerox laser printer). But we still want to print at home now and then.

The product required some initial setup, but then it worked nicely as a shared network printer over wifi, which is very convenient. It printed fast, it started quickly enough, it simply worked for us just fine. Maybe the one thing we disliked was that when it stopped a job for missing paper, it didn’t continue after the paper was added to the tray. But still, as with any other family member, we could tolerate some quirks.

But then, probably after a year, it started to have problems taking the paper in. Very often it reported paper jams that were not real. It was kept clean, we always covered it, and it was never dusty, really. Even while reporting a paper jam it would print a report when requested directly from the printer. It simply got too quirky.

The problem is likely just some sensor, but it may be partially software-based too. The feeding problem seems to be mainly mechanical, which is strange after using it only with office paper and only for about 200 pages. I checked the Xerox pages and the internet, but couldn’t find any easy fix.

With a 50€ printer you don’t really know whether it’s economical to repair it. Our current plan is to finish the current toner and then try another printer, perhaps another brand. That may be 3–5 years from now, but still – it will be waste.

So this is the story of low reliability.

Voice/chat software case: Microsoft Skype

Now let’s move to the software world – this time we will talk not about waste but about innovation. Wannabe innovation, that is.

For months Microsoft was pushing its newer version and for months we resisted (this included virtually everyone I knew). Eventually Microsoft forced the new version upon us and it was exactly what I expected: a terrible downgrade in the name of progress. Every single message has a smiling icon at its end – which is misleading at the very least, even if it gives every message a kind of happier feeling. Yes, I know what this icon is for, but when you get some really sad or horrible message, the placement of this emoji button seems utterly absurd.

I wish this were the biggest problem with the new Skype. The real problem is that while previously I could see who was online and who was not (except my mum, who likes to stay hidden and then complains I’m not calling), with the new Skype all my contacts are effectively hidden. And nobody seems to like it. I’d like to see the Skype traffic before and after the change, because at least among my friends it’s close to zero now.

Skype is a typical case of innovation for its own sake. I don’t care about material design – and yes, just to clarify, I mostly use a PC. I understand that sometimes things get a bit worse in order to get better. But Skype got much worse and Microsoft seems to be OK with it. It keeps telling us how much better Skype is – so much for their self-reflection.

Sure, there is another explanation for it all: perhaps they want to kill the product. But why would they do it in this manner? No, no, I really believe Microsoft is somehow convinced they are creating a better product. But none of my contacts agrees. And we don’t talk about it anymore. Not on Skype, anyhow.

So this is the story of unwanted pseudo-innovation.

Business communicator case: Microsoft Skype for Business (AKA Lync)

Now this could be quite a useful product. Sure, it has its own bugs, but it provides video conferences, it records meetings, it just works… kinda. But for whatever reason Microsoft decided that we should have no in-application control over the microphone level, while the application itself can lower it anytime – even to zero.

The result is that when my colleagues complain they can’t hear me, I have to go to the Windows audio settings and reset the microphone level to something normal. Strangely, even at zero it still is not absolutely muted, so recently I started to use the +30 dB boost instead and just leave the level at zero when Skype for Business insists.

Does Skype really do this? Sure it does, I tested it. Do other applications do it too? It depends. Audacity, for instance, does not – it respects my setting and records the sound as expected. Slack does it to a degree; I’m not sure what its rules are, but it never set the level all the way down to zero.

The trouble with Skype for Business is that the level never goes up when necessary, only down.

So this is the story of how little control is given to us – the users.

Smartphone madness

And this is the story of consumerism at its best – or worst, if you will. I’ve actually had just three smartphones since 2010, although I also used a couple of company smartphones before I bought my own dual-SIM device. I’ve used only Android devices, but it was never the same Android. Buttons change meaning all the time, the back button is once on the right and then on the left, and all this is combined with a touch-based UI that is generally more limited than mouse-based ones – one example being the lack of hover functionality, like tooltips. While mobile/touch-based UIs try to compensate with other techniques, these are hardly well understood by most application writers. And there are simply too many “standards” in use now, all at once. I often recall the book The Design of Everyday Things, which showcases many overly smart designs – bad designs, actually.

Sure, I’m not a teenager effortlessly switching from one phone to another or from one application to a newer one. But I’m also not from the “old generation” that can’t handle a smartphone at all. When I see a new app, I try to utilize what I already know – not mechanically, that’s not even possible nowadays. I try a long-touch, I look for some hamburger button, I try the same thing in the list and in the detail view, etc. But very often things are rather random.

And then there are the hardware/sensor compromises that make a phone cheaper. The missing indication LED shocked me on my second phone. On the third one it’s the missing proximity sensor – after a call it takes ages until the display lights up so I can hang up, and using the display itself as a “proximity sensor” results in many calls hung up accidentally. Sure, I’m not buying 500+€ phones, but the ways manufacturers omit quite important details – resulting in much worse usability – still surprise me.

Reliability, stability

In The Design of Everyday Things the author says (as I remember it) that sometimes we can’t clearly decide which design is better, and in many cases we should simply standardize. The thing is, once people already know something, it is just as intuitive as something that can be grasped without learning.

Perhaps the traditional menu-based UI is not the best one, although we still use menus anyway, at least contextual ones. But there are a couple of trends that bother me.

The first is how quickly things change. Often we can hardly assess the usefulness of a new UI, because every style needs some time to get used to. I’m sure companies measure things with various tests, but again – what is useful in the long term may not be obvious immediately. A/B testing can be cool when you want to maximize impressions or when you test a specific custom web experience, but these are all local optimizations. How does it help across applications when we don’t arrive at the same conclusions?

And that is the second problem – with menu-based systems things had converged a bit. Here and there some application tried to be bold, and it was fresh. But those were still just exceptions. Nowadays everything is an exception. Everything wants to be special. Things diverge. New UIs appear often and in great numbers. We probably do need to experiment more, but everybody experiments – Google, Microsoft, Facebook, you name it. So we have to learn and relearn more often.

I’m not against learning at all – I have to do it all the time in my line of work. But perhaps we overdo it a bit, don’t we? Past some point it becomes inefficient. Should you skip this or that wave of technology? What is essential and what is just hype?

No wonder we have to suffer UI designs with possibly good ideas but low overall quality. At such a tempo of development it’s difficult to build a UI without bugs. And while the widgets are often provided by the environment, it’s difficult to build applications for various devices – applications that last. And if they don’t last… should I even start using them in the first place?

Buying major appliances is still somewhat safe – in my experience they last at least 8 years even if not maintained properly. (Rarely is a washing machine or a dishwasher actually maintained.) But consumer electronics is another story – the smaller the device, the more extreme it gets.

There is more and more software in it, and more and more of that software is less and less stable. Security is a big concern nowadays, but how can it be built in when even the basic functions are often half-baked? TV manufacturers say we don’t want smart TVs. I disagree – I want a smart TV! It doesn’t have to be super-smart – just working, really. And when I use the YouTube app, I want to be able to use an external keyboard to search for videos instead of the on-screen one with the remote. How hard is that? Probably harder than saying “customers are not interested”.

What next?

So here we are. We can indulge in buying more and more electronics. It is more and more powerful – that’s for sure. But it does not satisfy us for long. And it is not for everyone either. There are people who are plainly scared of anything new – mostly because they have to relearn everything just to do exactly what they did before, for instance making calls. It seriously is a problem, and you’re lucky if your (grand)parents have no trouble with it at all. This problem will not go away in years; it will take decades – and who knows what problems our generation, or the younger ones, will have with the next waves of devices.

I’m not saying technology has to slow down. But user experience should not evolve this quickly – we can’t be relearning our habits every two years or so. Often, in order to create more natural and seamless controls, we create controls that are not obvious (well, seamless, right?). We innovate way too fast on the surface, on the interfaces.

And because everybody thinks they know how to innovate, things diverge. We have to relearn things often. And we have no idea what bugs and security issues lurk under the cover. And – as I said at the beginning – it’s part of our system and very difficult to change. Buying a new smartphone often is now part of our culture.

I’m not without my share of “sin” here. But I’m really waiting for a culture of less. Less waste, less plastic, fewer toys for my kids, fewer sweets for Christmas – fewer sweets all year, really. More of all these things only displaces other, more important “things” from our life.

But enough! It’s just after Christmas, early morning here, and I’m listening to Bing Crosby & David Bowie – “The Little Drummer Boy (Peace On Earth)”… so a late Merry Christmas and all the best for the next year. More love, joy and fun – and less of those less important things.


Classpath too long… with Spring Boot and Gradle

Java applications get more and more complex and we rely on more libraries than before. But command lines have length limits, and eventually you can get into trouble if your classpath gets too long. There are ways to dodge the problem for a while – like keeping your libraries on shorter paths (neither Gradle nor Maven helps here with its repository layout). But this is still just a pseudo-solution.

Edit (Oct 29): I added a section about a plugin that makes the whole solution automatic – it would work if it treated the paths as URLs.

When you suddenly can’t run the application

On Windows 10 we hit the wall when the command line of our Spring Boot application went over 32 KB. On Linux this limit is configurable and in general much higher, but there is often some hard limit for a single argument – and the classpath is just that, a single argument. The question is whether we really want to blow up our command line with all those JARs, or whether we can do better.
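
A quick back-of-the-envelope check illustrates the scale; the cache path and the dependency count below are made up, but realistic for a Gradle dependency cache:

```shell
# A hypothetical (but realistically long) JAR path in the Gradle cache
jar='C:\Users\me\.gradle\caches\modules-2\files-2.1\com.example\some-library\1.2.3\abcdef1234567890\some-library-1.2.3.jar'

# Join 300 such entries with ';' the way a Windows classpath is built
classpath=''
i=0
while [ "$i" -lt 300 ]; do
  classpath="$classpath$jar;"
  i=$((i + 1))
done

# The resulting single argument is far beyond the 32 KB command-line limit
echo "${#classpath}"
```

Three hundred transitive dependencies are nothing unusual for a bigger Spring Boot application, and each entry carries the full repository path – hence the problem.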

Before we get there, though, let’s propose some other solutions:

  • Shorten the common path (as mentioned before), e.g. copy all your dependencies into something like c:\lib and make those JARs your classpath.
  • With all the JARs in a single place, you can actually use the Java 6+ feature -cp “lib/*”. That is, a wildcard classpath using * (not *.jar!) and quotes. This is not a shell wildcard (that would just expand into a long command line again) but an actual feature of the java command (here are the docs from version 7). This is actually quite usable and it also scales – but you have to copy the JARs.
  • Perhaps you want to use the CLASSPATH environment variable instead? This does not work – its limit is 32 KB as well. So, no solution.
  • You can also extract all the JARs into a single tree and then repackage it as a single JAR. This also scales well, but involves a lot of disk operations. Also, because the first appearance of a class wins, you have to extract the JARs in classpath order without overwrites (or in reverse order with overwrites).

Of all these options I like the second one best. But surely there must be a better one, no?

JAR with Class-Path in its manifest

I’m sure you know the old guy META-INF/MANIFEST.MF that contains meta-information about the JAR. It can also contain a classpath, which will be added to the initial one from the command line. Let’s say the MANIFEST.MF in some my-cp.jar contains a line like this:

Class-Path: other1.jar other2.jar

If you run the application with java -cp my-cp.jar MainClass, it will search for MainClass (and other needed classes) in both “other” JARs mentioned in the manifest. Now I recommend you experiment with this feature a bit, and perhaps Google around, because it seems easy but it has a couple of catches:

  • The paths can be relative. Typically you have your app.jar with the classpath declared in its manifest and you deliver some ZIP with all the dependencies at known relative paths from app.jar. You can still run your application with java -cp app.jar MainClass or – even better – with java -jar app.jar, with Main-Class declared in the manifest as well.
  • The paths can also be absolute, but then they need to start with a slash (natural on Linux, not so much on Windows). On Windows it can actually be either kind of slash; I guess it works the same on Linux (compile once, run anywhere?).
  • If a path is a directory (like an exploded JAR), it has to end with a slash too.
  • And with spaces you get into escaping trouble… but by then you’d probably have figured out that the paths are not paths (as in the -classpath argument) but in fact URLs.
  • Now throw in the specifics of the MANIFEST.MF format: a maximum line length of 72 bytes, continuation lines with a leading space, CRLF, … Oh, and if you write your manifest manually, don’t forget to add one empty line – or, in other words, don’t forget to terminate the last line with CRLF as well. (Talking about line separators and line terminators can get very confusing.)
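
For illustration, a hand-written manifest with absolute Windows paths containing a space could look like this (the paths are made up; note the URL form with the %20-encoded space, the continuation line starting with a single leading space because of the 72-byte limit, and remember that every line – including the last one – must end with CRLF):

```
Manifest-Version: 1.0
Main-Class: com.example.MainClass
Class-Path: file:/C:/apps/my%20app/lib/other1.jar file:/C:/apps/my%20
 app/lib/other2.jar
```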

Quickly you wish you had a tool that does this all for you. And luckily you do.

Gradle to the rescue

We actually had specific needs for our classpath too: we ran the bootRun task with the test classes as well, for development reasons. In the end, bootRun is not used for anything but development, right?

Adding the test runtime classpath to the total classpath “helped” us over that command-line limit too. But we still needed it. So instead of just having classpath = sourceSets.test.runtimeClasspath in the bootRun section, we needed to prepare the classpath JAR first. For that I created a classpathJar task like so:

task classpathJar(type: Jar) {
  inputs.files sourceSets.test.runtimeClasspath

  archiveName = "runboot-classpath.jar"
  doFirst {
    // If run in the configuration phase, some artifacts may not exist yet (after clean)
    // and File.toURI can't figure out what is a directory to add the critical trailing slash.
    manifest {
      def classpath = sourceSets.test.runtimeClasspath.files
      attributes "Class-Path": classpath.collect { f -> f.toURI().toString() }.join(" ")
    }
  }
}

This code requires a couple of notes, although some of them are already in the comments:

  • We need to treat the files (the components of the classpath) as URLs and join them with spaces.
  • To do that properly, all the components of the classpath must exist at the time of processing.
  • Because after clean some components don’t exist yet at task configuration time (see Gradle’s Build Lifecycle), we need to set the classpath in the task execution phase. What may not exist yet? JARs of other projects/modules of our application, or the classes dirs of the current project. Important stuff, obviously. (If you run into seemingly illogical “class not found” problems, this may be the culprit.)
  • Another reason these artifacts may not exist is missing proper dependencies. That’s why I mention the concatenated components of the classpath in the inputs.files declaration.

EDIT: For the first day of this post I had dependsOn instead of inputs.files. It was a mistake causing unreliable task execution when something upstream changed. Sorry for that. (I am, I suffered.)

And that’s it

Now we just need to mention this JAR in the bootRun section:

bootRun {
  classpath = classpathJar.outputs.files
  //…other settings, like...
  main = appMainClass // used to specify an alternative "devel" main from the test classpath
}

I’m pretty sure this can be done in other build tools too, and we could make a plugin for it as well. It would probably also be possible with some doFirst directly in bootRun, but I didn’t want to mix it in there.

But again, this nicely shows that Gradle lets you do what you need to do without much fuss. It constantly shouts: “Yes, you can!” And I like that.

But wait, there’s a plugin for it!

EDIT: October 28th, 2018

Weeks after I resolved the problem myself, I decided to re-search the internet about it with a fresh head. Through this issue I found gradle-utils-plugin and its recently updated clone. I decided to use the latter, and after I removed my original solution (the bootRun task in my case has classpath = sourceSets.test.runtimeClasspath), all I needed to do was add the plugin declaration at the beginning of the build script:

plugins {
    // solves the problem with a long classpath using a JAR instead of a classpath on the command line
    id "ua.eshepelyuk.ManifestClasspath" version "1.0.0"
}

(EDIT: November 2nd) However – there’s a twist. This plugin does not work with spaces in paths – that is, it’s not tested for them and it’s broken. It does not treat the paths as URLs, which is critical in the case of Class-Path in the manifest. The repository does not offer “Issues”, so we’re back to our own solution.

Self-extracting install shell script with Gradle

My road to Gradle was much longer than I wanted. Now I use it on a project at the company and I definitely don’t want to go back. Sure, I got used to Maven and we got familiar – although I never loved it (often the contrary). I’m not sure I love Gradle (yet), but I definitely feel empowered by it. It’s my responsibility to factor my builds properly, but I can always rely on the fact that I CAN do the stuff. And that’s very refreshing.


Gradle is not easier to learn than Maven, I guess. I read Building and Testing with Gradle when it was easily downloadable from Gradle’s books page (I’m not sure what happened to it, but you probably still can get it somehow). The trouble with Gradle is that sometimes the DSL changes a bit – and your best bet is to know how the DSL relates to the API. The core concepts are more important and more stable than the DSL and some superficial idioms.

Does it mean you have to invest heavily in Gradle? Well, not heavily, but you don’t want to merely scratch the surface if you want to crack why some StackOverflow solutions from 2012 don’t work out-of-the-box anymore. I’m reading Gradle in Action now, nearly finished, and I can just say – it was another good time investment.

My problem

I wanted to put together a couple of binary artifacts and make a self-extracting shell script from them. This, basically, is just a zip and a cat command with some head script: zip gets all the binary artifacts together, and cat joins the head script with this ZIP file – both separated by some clear separator. I used this “technology” back in the noughties, and even then it was old already.

What can such a head script look like? It depends on many things. Do you need to unzip into a temporary directory and run some installer from there? Is the unzipping itself the installation? Let’s just focus on the “stage separation”, because the rest clearly depends on your specific needs. (This time I used this article for my “head” script, but there are probably many other ways to unzip the “tail” of the file. Also, the article used TGZ; I went for ZIP, as people around me are more familiar with it.)

set -eu

# temporary dir? target installation dir? - here we simply extract into ./out
EXTRACT_TO=out

echo "Extracting (AKA installing)..."
ARCHIVE=`awk '/^__ARCHIVE_BELOW__/ {print NR + 1; exit 0; }' $0`
tail -n+$ARCHIVE $0 >
unzip -q -d $EXTRACT_TO
rm -f

# the rest is custom, but exit 0 is needed before the separator
exit 0
__ARCHIVE_BELOW__


Now, depending on the way you join both files, you may or may not need an empty line below the separator.

To be more precise and complicated at the same time: there may or may not need to be an LF (\n, ASCII 10) at the end of the separator line. Beware of the different meaning of the “last empty line” in various editors (Windows vs Linux); e.g. vim by default expects a line terminator at the end of the file but does not show an empty line, while Windows editors typically do (see the explanation).

Concatenation… with Gradle (or Ant?)

Using the cat command is easy (and that one requires a line-feed after the separator). But I don’t want to script it this time – I want to write it in Gradle. And Gradle gives me multiple superpowers. One is called Groovy (or Kotlin, if you like, but I’m not there yet). The other is called Ant. (Ant? Seriously?! Yes, seriously.)

Now I’m not claiming that the built-in Ant is the best solution for this particular problem (as we will see), but Ant already has a task called concat. BTW: an Ant task is just a step or action you can execute – when you think Gradle tasks, think Ant targets – and Ant’s targets are not our concern here.

Ant provides many actions out of the box, and all you need to do is use the “ant.” prefix to get to Gradle’s AntBuilder. But before we try that, let’s try something straightforward, because if you can access a file, you can access its content too. One option is to use File’s text property, something like in this answer. The Groovy script looks like this:

apply plugin: 'base' // to support the clean task

task createArchive(type: Zip) {
    archiveName ''
    from 'src/content'
}

task createInstallerRaw(dependsOn: createArchive) {
  doLast {
    file("${buildDir}/").text =
      file('src/').text + createArchive.outputs.files.singleFile.text
  }
}

OK, so let’s try it:

./gradlew clean createInstallerRaw
# it does its stuff
diff build/distributions/
# this prints out:
Extracting (AKA installing)...
Binary files build/distributions/ and differ

Eh, that last line is definitely something we don’t want to see. I used a couple of empty files in src/content, but with realistic content you’d also see something like:

  error:  invalid compressed data to inflate out/Hot Space/01 - Staying Power.mp3
out/Hot Space/01 - Staying Power.mp3  bad CRC 00000000  (should be 9faa50ed)

Let’s get binary

File.text is for strings, not for grown-ups. Let’s do it better. We might try the bytes property, perhaps joining the byte arrays; eventually I ended up with something like this:

task createInstallerRawBinary(dependsOn: createArchive) {
  doLast {
    file("${buildDir}/").withOutputStream {
      it.write file('src/').bytes
      it.write createArchive.outputs.files.singleFile.bytes
    }
  }
}

Now this looks better:

./gradlew clean createInstallerRawBinary
# it does its stuff
diff build/distributions/

And the diff says nothing. Even the Hot Space mp3 files play back flawlessly (well, it’s not FLAC, I know). But wait – let’s try a no-op build:

./gradlew createInstallerRawBinary
2 actionable tasks: 1 executed, 1 up-to-date

See that 1 executed in the output? This build zips the stuff again and again. It works, but it definitely isn’t right. It’s not Gradlish enough.

Inputs/outputs please!

Gradle tasks have inputs and outputs properties that declaratively specify what a task needs and what it produces. Nothing prevents you from using more than you declare, but then you break your own contract. This mechanism is very flexible, as it allows Gradle to check what needs to be run and what can be skipped. Let’s use it:

task createInstallerWithInOuts {
  inputs.files 'src/', createArchive
  outputs.file "${buildDir}/"

  doLast {
    outputs.files.singleFile.withOutputStream { outStream ->
      inputs.files.each {
        outStream.write it.bytes
      }
    }
  }
}
Couple of points here:

  • It’s clear which code configures the task (the first lines declaring inputs/outputs) and which is the task action (the closure after doLast). You should know the basics of Gradle’s build lifecycle.
  • With both inputs and outputs declared, we can use them without any need to duplicate the file names. We foreshadowed this in the previous task already when we used createArchive.outputs.files.singleFile instead of “${buildDir}/distributions/…”. This works its magic when you change archiveName in the createArchive task – you don’t have to change anything in the downstream tasks.
  • No dependsOn is necessary here; just mentioning the createArchive task as an input (Gradle reads it as “outputs of the createArchive task”, of course) adds the implicit, but quite clear, dependency.
  • With inputs.files we can also iterate over the files. Here I chose the default it for the inner closure and therefore had to name the outer closure’s parameter outStream.

Does it fix our no-op build? Sure it does – just try running it twice yourself (without clean, of course).

Where is that Ant?

No, I didn’t forget Ant, but I wanted to use some plain Groovy before we got to it. I actually didn’t measure which is better; for archives of tens of megabytes it doesn’t really matter. What does matter is that Ant clearly says “concat”:

task createInstallerAntConcat {
  inputs.files 'src/', createArchive
  outputs.file "${buildDir}/"

  doLast {
    // You definitely want binary=true if you append a ZIP, otherwise expect corruption
    ant.concat(destfile: outputs.files.singleFile, binary: true) {
      // ...or multiple single-file filesets
      inputs.files.each { file ->
        fileset(file: relativePath(file))
      }
    }
  }
}
This uses the Ant task concat – it concatenates the files mentioned in the nested filesets. This is the equivalent Ant snippet:

<concat destfile="${build.dir}/" binary="yes">
  <fileset file="${src.dir}/"/>
  <fileset file="${build.dir}/distributions/"/>
</concat>
It’s imperative to set the binary flag to true (the default is false), as we work with binary content (the ZIP). Using single-file filesets ensures the order of concatenation. If we used something like this (in the doLast block)…

ant.concat(destfile: outputs.files.singleFile, binary: true) {
  fileset(dir: projectDir.getPath()) {
    inputs.files.each { file ->
      include(name: relativePath(file))
    }
  }
}
…we might get lucky and get the right result, but just as likely the ZIP will come first. The point is, a fileset does not represent the files in the order of its nested includes.

We may try filelist instead. Instead of include elements it uses file elements. So let’s do it:

ant.concat(destfile: outputs.files.singleFile, binary: true) {
  filelist(dir: projectDir.getPath()) {
    inputs.files.each { file ->
      file(name: relativePath(file))
    }
  }
}
If we run this task, the build fails with a runtime error (during the execution phase):

* What went wrong:
Execution failed for task ':createInstallerAntConcatFilelistBadFile'.
> No signature of method: is applicable for argument types: (java.util.LinkedHashMap) values: [[name:src\]]
  Possible solutions: wait(), any(), wait(long), each(groovy.lang.Closure), any(groovy.lang.Closure), list()

Hm, file(…) tried to create a new Java file object, not Ant’s file element. In other words, it did the same thing as anywhere else in the Gradle script where we’ve already used the file(…) construct. But it doesn’t like maps and, most importantly, it is not what we want here.

What worked for include previously – although the build didn’t do what we wanted for other reasons – does not work here. We need to tell Gradle explicitly that we want to use Ant – and all we need to do is use ant.file(…).

Wrapping it up

Now that I’ve tried it all, I have to say I’m glad I learned more about the Gradle-Ant integration, but I’ll just use one of the non-Ant solutions. It seems that ant.concat is considerably slower.

In any case, it’s good to understand Gradle’s build phases (the lifecycle) and to know how to specify task inputs/outputs.

When working with files, it’s always important to realize whether you work with text or binary content, whether it matters, and how it’s supported. It’s also important to know if/how your solution preserves the order of the files when it matters.

Lastly – when working with shell scripts it’s also important to ensure they use the right kind of line terminators. With Git’s typical automatic line-ending conversion you can’t just pack a shell script with CRLF and run it on Linux – this typically results in a rather confusing error that /bin/bash is not the right interpreter. Opening the file in some binary mode (e.g. vi -b) helps to discover the problem. But that is not a Gradle topic anymore.

I like the flexibility Gradle provides. It pays off to learn its basics and to know how to work with its documentation and API. And with that, you mostly get your rewards.

Samsung UE50MU6170 review

So I’ve bought a new TV (it’s actually the UE50MU6172 in our country, but that doesn’t matter) after roughly 9 years, upgrading our 32” screen to 50”. And it’s Samsung again. Why? Because I saw it, tried it, and it didn’t dim the screen the way I hate. Little did I know that it skips the dimming only for PC input.


Let’s start with the good things.

  • It’s big. 50” is enough for me, my family and the room it’s in.
  • The remote is cool and works well. I even managed to set it up to control my Motorola cable TV set-top box. It may be for everyone, but as I don’t like new things easily, I like this one a lot.
  • Good price. Roughly 500€ is acceptable for me for this performance.
  • 4K. Not sure when I use it, but why not for that price.
  • I like the menu. Probably not the best possible thing, but way improved over their previous Smart solutions.
  • Fancy thing – it has voice control that somewhat works. It switches between inputs, can open Settings, etc.
  • Picture is fine… often.

But I kinda expect a TV to work, right? And there is one more thing I’d expect. Samsung claims it has a stunning picture and considers “dimming” to be a positive. So let’s get to it.

That stupid dimming!

I bought this one because I believed it doesn’t dim – or that I could switch the dimming off somehow. It did not dim when I tested it. What exactly am I talking about, you ask?

When the scene is dark, the whole backlight goes darker too – supposedly to provide you with blacker blacks. Nowadays, and for many years already, Samsung has preferred darker blacks and doesn’t care that white is not white anymore – it’s mere grey. In normal daylight the result often is not readable (when it’s text) or the scene is not discernible anymore (when it’s something happening at night, for instance).

Many people complain about it, just Google “samsung disable dimming” or similar searches. But the situation is complicated as the dimming can be caused by various ECO features (I’ve got that covered already), how the input is treated, picture mode/style, various picture “enhancements” (mostly off is better than on), etc.

What is striking, however, is that even on official forums various Samsung moderators and officials act like they don’t know what we’re talking about. Now, I’ve got two theories for that – and I believe both are true at the same time.

Sometimes they may know what we mean by “dimming”, but they really don’t know what kind of dimming it is (the reason for it). So they have to investigate and guide you through dozens of settings that sometimes help and sometimes don’t.

Other times, I think, they know exactly what we mean, but they also know their engineering screwed up big time and they have to cover for them, as there is no single fool-proof way to disable it.

Or they really are clueless – I don’t know.

Now, there are some hacks for that, but they mostly don’t work for your model, or they are buried in some service menu that voids your warranty or whatnot – and even then they may not work for your model.

Instead of guessing what mode (Gaming? PC? Movie?!) I need to use to disable that dimming, there should be an easy-to-understand setting related to “dimming” or “dynamic contrast” or something. For years Samsung has been ignoring annoyed voices on this topic. Sure, my mum will not complain; sure, many people don’t know that it doesn’t have to be this way. They don’t know that the source is OK, that the telly is the problem. But there is an informed minority that complains and is ignored.

Other annoying stuff

This will probably be our TV for the next 10+ years and I hope it will serve well in most cases, but there are more little annoyances I noticed after just a couple of hours of usage.

  • It does not have a 3.5mm audio jack. It hardly has any easily usable “legacy” audio output. I need to use headphones with the TV in some situations – and for this TV I had to buy new Bluetooth ones. Luckily, I learnt about it in advance, but I’d never have expected it.
  • I experimented with the browser (an app called Internet); it seems to be OK, but I don’t know how to toggle full-screen in it. Sites can do it when they are built that way, but I can’t do it myself.
  • Today (the second day we have it) it started to quit from Settings back to Home (the bar at the bottom) immediately after I choose Settings. If I manage to choose/move in the settings quickly (not easy – the UI is not very slow, but it’s not really snappy either), it stays in Settings. This “auto-quit” is really annoying; it didn’t do it earlier in the day and there is no obvious reason why it started. I had plugged in a wireless keyboard with a USB dongle in the meantime, but it does it even after I turn the keyboard off and unplug the dongle. Even after turning the TV off and on. I can’t find anything about this issue online.
  • The YouTube app does not support the keyboard – still. I don’t know who produces it (YouTube/Google? Samsung?), but it truly is terrible for searching stuff. Luckily, one can use the Internet app (browser) and have a normal YouTube experience there. With a keyboard on your lap it feels nearly like desktop.

Overall, Samsung promotes features and new things more than it cares about usability or quality. Usability got better since their Blu-ray player I bought 6 years ago, for sure, but it’s not there yet.

Rating? 3/5

It could be more, but I really feel cheated when cheap dynamic contrast makes some scenes actually worse. Hard to watch, really. I don’t want anti-features. Otherwise it’s an OK TV and most of the time I’ll probably be happy. But I know there will be brightness transitions that make me cringe. And that will happen way more often than rarely. And I still don’t know how support will help with the virtually inaccessible Settings menu.

Oh… we still really suck at creating software. And it appears in more and more devices.

EDIT a day later: Unplug the TV for some time. Some settings are lost (inputs), but no big deal – and Settings pops up and stays there as expected.

Should you buy Samsung?

I don’t really know. My old 32” Samsung had a great picture that did not change white to grey every time the scene went mostly black/dark. For years Samsung’s offerings have been better technologies spoiled by a stupid dynamic-contrast solution – which I believe is more or less a software bug.

A friend of mine told me: “Give LG a chance! They have superb webOS, check it out!” I didn’t listen. Now I can only read on forums that LG has dimming too – but with an option to switch it off. Perhaps LG knows better.

So, before you buy Samsung, check out LG… I guess. Or anything else for that matter.

EDIT: 12 days later

The Settings bug appeared again, and it seems the TV has some issues with random commands even when I completely remove the batteries from the remote (actually, I removed the batteries from all my remotes to double-check).

I wasn’t sure whether it was something with my older Samsung Blu-ray player BD-E6100, but today when I wanted to play a DVD on it, it started blinking “Not Available” – which it does when I press an RC button that has no function (currently). Later, in the menu, it acted like the down button was being held. It was stuck in this “mode” even when I disabled all my remotes. It got better after the first press of some action on the TV remote but – strangely enough – not on the player’s remote. I could even trigger this loop using the TV remote’s Home and Back buttons.

Eventually I figured out that somehow the TV and the player were talking to each other in an infinite cycle. I switched off the Anynet+ (HDMI-CEC) setting on the Blu-ray player and – voilà – problem gone. Together with the option to control the playback with a single RC (not really a problem for me).

However, Settings falling back to Home immediately did not disappear, and this time the TV is to blame. Even with all other devices off it does it. I can reproduce it with the keyboard and with a dead RC without batteries. Removing the keyboard’s USB dongle does not fix it, although I don’t know whether the TV went crazy because of it in the first place – it shouldn’t have.

So, to sum it up: I’ve got an older, but still not-to-be-replaced, Blu-ray player from Samsung and a TV from Samsung – and they don’t go well together. Both devices are updated. Even the TV by itself isn’t that reliable when it comes to its operating system. So far I’m not going back to the shop with that monster, as I can work around both issues – but it’s annoying. It shows how complexity makes quality harder. And it seems Samsung is not ready for it.

The Sins of (PC) Gaming Industry

The title wasn’t supposed to sound like “Sins of a Solar Empire” originally. Especially because the Sins are a nice example of how games should be made – proving that they can be successful even without anti-piracy protection.

This is not a review of a couple of games. It is a review of ugly things that even good games often don’t avoid. I don’t know why – often it’s completely avoidable. In other cases it requires some effort, I’m aware of that. And yes, I’m talking about PC games, so this is kind of a minority report.

Case 1: Mass Effect 2

I’ve just finished the first Mass Effect after a long time and I’m moving on to ME2 to prep myself for Mass Effect 3, which I finally decided to buy. I hadn’t before because it wasn’t available on Steam; however, as I’ve got Origin after all (Sims 2 for free was enticing), I can now proceed to the final part of the trilogy.

I have hardly anything against the first ME. Sure, it wasn’t perfect and it had some bugs even a casual player could encounter – but it was a solid game. ME2 brings many improvements – more story, more variety, everything. But then there are these stupid things the developers could have avoided.

Unskippable videos are the first sin. This applies to unskippable intro pictures/animations that often remind me of the difference between an original and a pirated DVD (yes, the pirated video plays from second one). Why do they annoy us with this? Over and over again? It’s about as interesting as a cookie warning – though there, stupid legislation is to blame.

A related thing is that sometimes they can be skipped – but the key is totally unexpected (not Escape, but Enter, like in ME1?). Sure, there is a question whether any key should interrupt a video or non-interactive sequence – but at least some should.

When you start a new game of ME2 you enjoy the video and sequences the first time, but not necessarily the second time within a day (bug reasons). Luckily it keeps running in the background, so I could write this post up in the meantime.

Hints using the original keymap instead of the redefined one. This is a minor sin, but it can be very confusing. Fixing it requires some effort, but I think it’s minimal and worth it. Otherwise it looks cheap and sloppy – which ME2 overall isn’t.

Inconsistent Escape/back behaviour compared to ME1. When I used Esc in ME1’s galaxy map, it pulled me up one level. ME2 gets out of the map immediately. To go back a level you have to use the button on screen. Small thing. Frustrating.

Console-like controls on PC. I understand this, but it has consequences. A single button now means “run”, “cover”, “jump across” and even “use”. That’s quite an overload. Often I want to run somewhere and instead I jump sideways across something.

Lack of some keybinds. While in ME1 you could customize virtually all actions, in ME2 you can’t select weapons directly. You can only cycle to the next/previous weapon – or use the HUD pause to do so, because in real time it’s frustrating and slow.

I’m quite surprised how playable the game still is in spite of all this. I do like playing it.

Case 2: Witcher 2

Again – a well-known game, and a good one too. But compared to Witcher 1 it suffers from a couple of striking omissions.

Control customization outside the game: The lack of in-game controls customization is the major one. While I can still customize the keybinding, I have to do so in a separate program while the game is not running. I don’t have the game installed now, so I don’t remember whether (or how much) this combines with the “unskippable video” sin, but even without it, it takes a lot of time until you nail your favourite binding. Sometimes I wondered whether the suffering was better or worse than no customization at all.

Needless to say, this game was also more consolized than the first one. But that’s a trend we probably can’t fight.

Case 3: Empire: Total War

This is another game that allows control configuration, but it has its own stupid twist that shouldn’t have passed the QA guys (or they were not listened to). When you choose a key that is already bound to something, the game says so… and you have to find where, and rebind that other action to some other key first.

So typically you probably first rebind a lot of stuff to some unused keys, so you can later freely rebind the actions to the keys you really wanted. Do I need to suggest the obvious improvement here? Just unbind the other key! Or exchange the bindings – although that is programmers trying to be unnecessarily smart.

Talking about that latter (suboptimal) option – I now recall Mass Effect 1, which exchanged bindings like this. By some accident I ended up with the E key bound both to forward and to something else at the same time. And I couldn’t get rid of that binding! There was no obvious way to disable it, and any time I tried to replace it with another key, E moved into the secondary slot. Thinking about it now, I never tried to replace the secondary option as well, but the whole idea was ridiculous. (A secondary binding is actually neat, not that I use it that much.)

Just override what I set and unbind the key from its original action.

(Just in case you’re asking why E is forward – know that ESDF is superior. Hands down.)

And other cases

Other serious sins are lousy, cheaply localized games with terrible translations and no option to switch both sound and text back to the original (mostly English). This also often affects patching: typically, original patches break something in the translated version and translated patches are not available.

I just hope many of my grudges are not relevant anymore. I have to admit I originally started this post in 2012, and I’m generally not buying brand-new 50+ bucks titles anymore.

What should it look like?

I liked Witcher 1, for instance. Even though I wasn’t an RPG guy – I mostly preferred first-person shooters or real-time strategies then – I really liked the game. Even though these “over-the-shoulder” games seem clumsier compared to the FPS genre, the first Witcher drew me in and kept me there for a long time. It was a fine PC game.

When Epic first pretty much failed – with Unreal Tournament 3 (not with its Unreal Engine, though) – it tried to redeem itself with the UT3 Black Edition. The original UT3 had great reviews but many negative user reviews. UT3 Black was a bit too little, too late, but it was a nice thing to do and they at least showed they cared about the UT brand after all. BTW: Epic is now making a new Unreal Tournament which will be available for free. I’m curious how that plays out, but it’s interesting for sure.

I’ve already mentioned Sins of a Solar Empire. It was a successful title even though it didn’t have DRM. The guys who made the game put it simply (not an exact quotation): “We’re making a game for people who buy it. Our customers don’t want DRM, so we don’t put it in.” This was a fresh perspective in a world where DRM systems go so far that they intentionally harm your computer. For many years I ignored Ubisoft games because of their DRM, even though I wanted to buy a couple of their games.

There are also other nice examples in the gaming industry, examples where you see that games are a true passion for someone – like GOG, or Good Old Games. Originally these guys prepared really old games from DOS times, like Doom or Descent, so that they work on modern systems with a current OS. And GOG too has a fair, DRM-free approach.

With this I’ve swerved from smaller (and some bigger) annoyances to a much more serious topic. Talking about GOG – they have a nice video about their optional client GOG Galaxy that pretty much sums it all up.

But why not talk about DRM? It’s perhaps the biggest sin of the industry: spending millions on something we don’t want. Sure, I played cracked games when I was younger (and with little to no money). I’m not exactly proud of it. Now I’m sure I’ve paid it back many times over. But I choose where my money goes. And goodwill pays back.

Pre-christmas technology review 2017

It’s been two years since my last technology overview and I feel the need for another one. Especially after an otherwise pretty weak blog year.

Notebook HP Stream 14” with 32GB eMMC (“Snow White”) (verdict: 2/5)

I got this for my son to teach him some HTML or similar entry-level computer stuff. He loves doing it. However, even the nearly bare pre-installed Windows is not able to update itself (the Anniversary Update, I guess) with over 7 GB of free disk space. And there is no way to free up any more – at least not without some hacking. There is just a single local user (admin), because any other user takes additional gigabyte(s), especially with a Microsoft account, OneDrive, etc. There are only a couple of apps: I’ve got Visual Studio Code (175 MB) and the K-Lite Codec Pack (111 MB).

That’s it. Virtually no data except some pretty tiny HTML files we work on and some pictures (a few megs). Anything else is beyond my control. I tried compressing the OS. I wanted to switch to Linux, but I’ve read eMMCs are not well supported there (in general). The disk is also incredibly slow – don’t let the fact that it’s a flash drive fool you; it’s not an SSD. Simply put, it’s something slow connected over a super-slow interface.

I believe there are devices where eMMC is acceptable, but please, let’s make it official: a notebook with eMMC is just a stupid idea. Especially with 32 gigs and Windows on top of it – that should be considered a crime. Or a scam at least.

If it weren’t for the problems of combining Windows with eMMC, I’d consider the machine OK for the price. It’s light and thin, looks good, runs long on battery (5–8 hours, no problem), and the display is bright enough. But it simply does not update anymore. Shame.

Next-day update: It didn’t boot the next time and kept ending in a BSoD. Recovery didn’t work anymore, although I’d never touched the recovery partition that was there. A Windows bootable USB didn’t help (the upgrade path needs to run from the booted system, and a clean install didn’t work because there wasn’t enough free room anymore). Ubuntu 17.10 installed, but booted to a black screen – however, after enabling the Compatibility Support Module (CSM) in the BIOS (F10 to get there), it booted fine.

Bluetooth speaker Bose SoundLink Mini II (5/5)

I wanted a speaker for a party and I wanted it on short notice (not that I needed it, I just wanted it). I was surprised how many options there are for a Bluetooth speaker, but Bose’s was available and I’d seen it a couple of weeks before – and heard it too. Rather pricey at just under 200 Eur, but I took the risk.

I’m not sure how much the speaker itself is responsible for “catching” the Bluetooth signal, but sometimes I had to keep it pretty close to the device (especially with my HP ProBook). I settled on using a mobile phone. What I love is the two-way control – I can pause or skip a song using the speaker’s buttons. I also like how it talks to me, and it was easy to pair with devices.

And then there is the sound. Perhaps too deep for my liking, but definitely satisfying. I expect a speaker to play nicely when quiet – that is, I hear enough musical detail yet I can still talk with people around me. This is that kind of speaker. The second revision also has micro USB on the speaker itself in case you don’t have the cradle with you (or it gets broken or whatever), which is nice. Some say it’s not powerful enough, but for me it is more than adequate. No regrets.

Windows package manager Chocolatey (5/5)

I encountered Chocolatey when I was experimenting with Windows unattended installation for VirtualBox. I’ve used it ever since, and it’s one of the first things I install on a fresh computer. Then I just check my notes about setting up Windows (warning: it’s a live document for personal use, not a blog post) and copy-paste commands to install what I want. This is what a good operating system should have at its core – and in a simple manner. Sure, you can add/remove things with various PowerShell commands, but the situation there is messy – some features are available only in the Enterprise edition, so it simply does not do what any Linux package manager does. Chocolatey is a nice plaster for this hole in Windows. Nuff said.

CSS grid layout

Shortly – go and try it. It’s awesome, especially for responsive design combined with media queries. I first heard about it in this video (highly recommended), and while playing with it I also used this tutorial. While still just a Candidate Recommendation, it’s widely supported in recent browsers (although there are some problems with IE, where it originally came from). And you can make it fall back gracefully, just in case. I’m no expert, but this is a technology not only for experts – and that’s good.

JavaScript framework ExtJS 4.x (3/5)

I encountered this JavaScript/UI framework on my most recent project; it had been selected for us long ago. It certainly is comprehensive and you can achieve a lot with it – that’s why the verdict went all the way up to 3; I simply can’t deny the results one can get with it. But for us it was too much. It all starts with an incredibly steep learning curve, and you pay dearly for anything you don’t understand. Talking about paying – it’s commercial and the license is pretty expensive. It may be good for all-in ExtJS shops, but not for an occasional project where people may change and need to learn it.

The story of our project? We started with Architect, then some “raw JS” devs walked through, the original Sencha workspace became unusable, and it was coded by hand from then on – until I came and saw the resulting mess. During development it produced off-by-1px glitches, like missing right borders of panels, until you ran the Sencha Cmd build that fixes them – but that takes a long time. In general, the whole process is pretty heavyweight, and we couldn’t figure out a way to build the result into a completely separate directory tree (e.g. target, when the Maven exec plugin was used) without breaking some path references.

So you probably can do a lot with it, but it’s all-in or rather not at all. It’s proprietary, and I’d rather invest in some mainstream OSS framework instead. That would be better both for my career and for the future evolution and maintenance of the project.

Note: Current version is 6.5.x.

Fitness band Xiaomi Mi Band 2 (4/5)

I bought this step counter (as that was the main reason I wanted it) just before last Christmas, and it has worked well for my purposes: it counts steps. How precise is it? I personally think it doesn’t matter. For a couple of days I carried some Garmin device on the same hand, and it counted 12k steps where the Mi Band counted 10k. But this doesn’t really matter. On your average day, 10k is more than 8k – you know which one is better. On days when you do something completely different, you can’t compare the number with a day with a different routine anyway. Both devices counted actual steps just fine – the difference accumulates between the long walks, like while typing on the computer. The Mi Band also clearly differentiates between walking and running, as it indicates more energy output for the same step count. So far so good – that’s what I wanted.

How good is the Mi Band 2 as a “smart band”? I don’t know. I don’t use it to measure my heart rate, as that doesn’t work that well while running anyway. I don’t have it constantly paired with a mobile phone either; only occasionally do I pair it with the phone running the application. Ah, the application – it runs only on newer Androids; my old Samsung Galaxy S3 was out of the question. It also doesn’t run on tablets in general, nor on a PC. That just sucks. I wasn’t able to connect the application to my Xiaomi account either – the instructions are unclear or unavailable, or something fails. I haven’t tried since; I simply use the app on the phone to see the history and don’t care about the bigger picture (yet). I’ve seen recommendations to use the band with a different app (like Google Fit).

Finally – after roughly half a year the device started falling out of the wristband. This happens to many people, it seems. I was quite lucky to always pick the device up before I bought some cheap replacement bands. These feel similar – rougher for sure, but after a while they don’t bother me at all. I like that they feel firmer than the original; it’s been just 3 months, though, so I’ll see whether they last longer or not. But at this price I have no problem replacing them often.

So, why not 5/5? The device was pretty expensive in Slovakia a year ago (50 Eur or so); it got into a normal range later. The application is not available on many devices (not running on tablets, for arbitrary reasons?!) and it didn’t connect easily (= at all, for me) with an online account. The display is good, but not in strong sunshine. Otherwise it does what I want – but let’s face it, I don’t want that much from it. 🙂

Amazon Prime (verdict cancelled)

OK, this was a rather hasty affair – it lasted less than an hour for me. I tried to use my existing account for it – and this is the first gripe. Amazon should seriously help us decide what account we should (or even could) use for the full benefit in a particular country. Prime has been official in Slovakia for only a couple of weeks (if I understand it correctly), but I have no idea whether I should use a COM or a DE account for it. I tried COM then – the trial is free, shouldn’t hurt.

I’m trying a link from the registration email to a movie included with the membership. It does not work in my region. Talk about confusing the customer. They should seriously employ their AI to offer me films that are actually available. There is no chance of understanding all the limitations before you actually start the trial.

Then I tried Amazon Music – as music is by far the most important thing for me. The site asked for billing information – why? Amazon has it already! Funny that their video site didn’t. But I filled it in… and the wait indicator just kept spinning and spinning. Hm. After a while I reloaded and retried – the same result. I couldn’t go further without providing the info, and they didn’t accept it (or reject it, for that matter). A terrible first experience. Anyway… as I browsed around, it dawned upon me that there is Music Prime (included) and Music Unlimited – for an additional 8 Eur/month.

Is even my kind of music available on Prime? I don’t need a million songs… a couple of thousand of the right ones would be enough. Let’s check. The newest hit in our family – Steven Wilson! Nope… Pink Floyd? Nope. Yes? Mike Oldfield? U2?! Unlimited only… I couldn’t find anything I liked on Prime!

I might as well cancel the trial immediately. And I did. (It will run out after a month, and after that the service will be cancelled.) A shame, as I’m rather a fanboy, especially when it comes to what they do with AWS. But first the Music site must actually work – especially at literally its first step – and then offer some reasonable music for a reasonable price. Total price, not add-on price.

I feel no urge to buy an Echo Dot for Christmas either… at least not this year. The whole landscape moves pretty fast anyway, and I’m curious where it all stands next December.

Converting 96kHz 24-bit FLAC to OGG with ffmpeg

Lately my son Robin asked for Peter Gabriel’s song The Tower That Ate People in the car. I like OGGs, although recently that may have become pointless, with the MP3 patents expired. But 15+ years ago it was an obvious choice for me, especially because most encoded MP3 files also had clearly cut-off high frequencies and generally lower quality at the same bitrate. Again – not a problem I encounter with newer MP3s. But I stayed true to OGG, and I honestly don’t need anything better than its Q7 level.

The song is on Peter’s OVO album, but the version Robin likes is from the Back to Front show in London. So I browsed to it, played it and – all the songs were skipped. Darn! I knew it must be because the quality was very high: the digital download, a companion to the Blu-ray Deluxe Book Edition (yeah, I’m a fan), was in 96kHz for both FLAC and OGG. So I had to re-encode the OGG – or better, the FLAC – to OGG at a normal sample rate (44.1kHz).

FFmpeg to the rescue!

I had previously transcoded OGGs to MP3s for a little radio that didn’t support OGGs (I never understood why this happens) and I was very satisfied with FFmpeg – when I can do something from a command line, I prefer that. So today I downloaded a Windows build of FFmpeg and tried to figure out the switches.

After some Googling I tried -codec:a libvorbis and it told me there is no such codec. So I ran ffmpeg -codecs to find out what OGG support there is (if any). There was just the built-in vorbis codec, so I tried that one. Then ffmpeg told me that its encoder is just experimental and I must add the -strict -2 switch to enable it. It worked afterwards, but the warning was strange, so I investigated further.

The trouble was that the build from the FFmpeg site did not have libvorbis compiled in. Every time you run ffmpeg it prints the configuration it was compiled with, and mine didn’t show --enable-libvorbis in the output. By accident I found out I already had an ffmpeg on my PATH – which was strange, considering I didn’t put the downloaded version there. It was part of ImageMagick, which I was pretty sure I had installed using Chocolatey (most recommended!); I don’t even remember why. But now it came in handy because, behold, this one had libvorbis in it!

If you have Chocolatey already, just run cinst -y imagemagick and then start a new console to find ffmpeg on your path. Or do it the hard way.

Those damn spaces!

I use spaces in filenames; replacing them with underscores or the like does not make much sense, not in the 21st century, I believe. I respect bash (and prefer it on Windows as well, as delivered by Git) and I consider myself more or less a power user (more “less” than “more”, I guess). I understand that long ago text was the thing and objects were not. But all this whitespace escaping is sometimes killing me. Just look at all the effort that went into escaping whitespace – IFS, quoting, print0, etc.

Actually, using the NUL character (print0) as a separator seems the most logical, but obviously it’s difficult to put it into plain text. Then again, plain text is awkward for representing anything except actual text anyway. I believe some richer environment where lists are true lists is the logical thing to have. I’m not necessarily hinting at PowerShell – not sure they got it right – but they tried to be progressive, for sure.
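
To illustrate the print0 idea – a NUL-separated pipeline that survives spaces (the file names are made up for the demo):

```shell
#!/bin/bash
# Made-up filenames with spaces that would break naive word-splitting.
mkdir -p demo
touch "demo/first song.flac" "demo/second song.flac"

# -print0 emits NUL-terminated names; xargs -0 splits on NUL, not on whitespace.
find demo -name '*.flac' -print0 | xargs -0 -n1 echo "found:"

# The same idea with a while-read loop; IFS= and -d '' keep each name intact.
find demo -name '*.flac' -print0 | while IFS= read -r -d '' f; do
    echo "processing: $f"
done
```

With plain `find … | xargs echo` instead, each of these names would arrive as two separate words.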

When I quote a name containing spaces on the input, it’s a single argument (let’s say $1). But when I used ffmpeg -i "$1"… in the script, the program complained that the first word of the filename is not a valid name. I had encountered this problem many times before, passing arguments from the command line to a script and from there to other commands. Switching to "${1}" fixed it for me – even though in plain bash the two forms should be equivalent, so the real culprit may have been elsewhere. I had always used the curlies only to separate the name of a variable from potentially colliding surroundings (and they are required for expansions like ${1%flac}). Handy. Not intuitive. And definitely not something you learn in this section, for instance.
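
Quoting is what decides whether a name with spaces survives as one argument. A minimal sketch (the file name is made up; the braces are required for expansions like the ${1%flac} used below):

```shell
#!/bin/bash
# Report how many arguments a command actually receives.
count_args() { echo $#; }

file="My Song.flac"   # made-up name with a space

count_args $file      # unquoted: word-splitting turns it into 2 arguments
count_args "$file"    # quoted: stays a single argument
count_args "${file}"  # braces: same thing, still a single argument

# Where braces do matter: suffix removal when building the output name.
echo "${file%flac}ogg"
```

The last line prints “My Song.ogg” – `%flac` strips the shortest matching suffix before the replacement extension is appended.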

If this all were more “object-oriented” (in the broader sense), it would be a filename – a String or even a File object – from the start all the way to where it’s used. Spaces would not matter.

Sample rate and unexpected “video” stream

Because the source FLAC file had a sampling rate of 96kHz – and I suspected this was the main reason the car audio system didn’t play it – I wanted to resample the audio to the more traditional CD quality. That’s what the option -ar 44100 does. Because OGG seems to have a single sample format, I didn’t have to care about bringing 24 bits down to 16.

But I was surprised that my OGG didn’t play in foobar2000, and loading it actually created two entries in the playlist. I checked the output of the command more carefully and noticed it had also converted a JPEG image embedded in the FLAC into a “video” stream. Not interested, thank you, said I – and that’s what the -vn (no video) switch does.

And the script is…

Add setting the quality of the output OGG and -y to overwrite the output (I experimented repeatedly; you may not want that, of course) and you get a script like this:


ffmpeg.exe -i "${1}" -ar 44100 -vn -codec:a libvorbis -qscale:a 7 -y "${1%flac}ogg"

It only encodes one file. The last thing I wanted was to handle multiple input arguments with a for loop, although I guess I could have used shift too. Anyway, the command is easy (assuming the script is saved as, say, flac2ogg.sh):

find . -name \*.flac -exec ./flac2ogg.sh {} \;

I guess ffmpeg doesn’t care about the input format as long as it recognizes it, hence the “anything”. Of course, ffmpeg can do much more – I just wanted to show one recipe, that’s all.
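
For completeness, the multi-argument for loop I skipped could be sketched like this (the names are made up; the actual ffmpeg call is commented out so the sketch runs anywhere):

```shell
#!/bin/bash
# Convert every argument; "$@" keeps each (possibly space-containing)
# filename a single word in the loop.
convert_all() {
    for f in "$@"; do
        # Swap the .flac suffix for .ogg to get the output name.
        out="${f%flac}ogg"
        echo "would convert: $f -> $out"
        # The real call (commented out so the sketch runs without ffmpeg):
        # ffmpeg -i "$f" -ar 44100 -vn -codec:a libvorbis -qscale:a 7 -y "$out"
    done
}

# Demo with made-up names, one of them containing a space:
convert_all "My Song.flac" "other.flac"
```

Unquoted, `$@` would fall apart on the space in “My Song.flac”; quoted, each file comes through intact.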