What I hate about PulseAudio is its crappy default configuration. After a fresh installation of SUSE Leap 42.2 I discovered that the subwoofer didn't work with Amarok... again. Luckily, the first link Google gave me was a link to my own(!) thread that I started about 6 years ago. Back in 2011 I ran into the same issue, asked for help, solved it on my own and answered my own request. And now future me was able to find that long-forgotten thread of mine. It's so sweet to receive a message from myself.

This time I will put the answer here to remember it better 😉

The problem is solved, thanks to this post.
In /etc/pulse/daemon.conf I made the following changes:

enable-lfe-remixing = yes
default-channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe

Python, R, Qt, peewee, bokeh, pandas, SQLite, plus a couple of sleepless nights – and here you are: a cute app for environmental monitoring needs.

Main window of the application

I wanted to have UUID fields in my Peewee-based models for the SQLite database. I quickly found ready-to-use code, but it lacked one important thing – automatic UUID generation. Here is the solution:

import uuid
from peewee import Field

class UIDField(Field):
    db_field = 'uid'

    def __init__(self, *args, **kwargs):
        # generate a UUID automatically unless the caller supplies a default
        kwargs.setdefault('default', uuid.uuid4)
        super().__init__(*args, **kwargs)

    def db_value(self, value):
        return str(value)  # convert UUID to str for storage

    def python_value(self, value):
        return uuid.UUID(value)  # convert str back to UUID
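The conversion logic can be sanity-checked without a database: the two functions below simply mirror the field's converter methods using only the standard library.

```python
import uuid

# Stand-alone versions of the field's converters (stdlib only, no peewee needed)
def db_value(value):
    return str(value)        # UUID -> str for storage

def python_value(value):
    return uuid.UUID(value)  # str -> UUID when reading back

original = uuid.uuid4()
restored = python_value(db_value(original))
assert restored == original  # the round trip preserves the value
```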


A satisfied customer

Posted: November 9, 2015 in GIS, This and That

Recently I did a QGIS scripting job and here is the feedback from an extremely satisfied customer:

It was fantastic to work with Yury. He provided an excellent product, that went far beyond what I was expecting. I will certainly be contacting Yury in the future for any jobs that relate to GIS and python scripting. Communication was excellent and his ability to understand the job requirement was very impressive. A+++

Guys, if you are in need of a geoprocessing tool for your project – don't hesitate to contact me 😉

My QGIS Processing Scripts at GitHub

Posted: October 23, 2015 in GIS

This is probably my shortest post ever.

All my QGIS Processing scripts (R and Python) and models that I have already blogged about, plus some extras, are now available at GitHub.

A Quick Map With QGIS and OSM

Posted: June 18, 2015 in GIS

What I love about QGIS is that one is able to create a nice map quickly. The other day I was asked to make a situation map for the project we are working on, to include in a presentation. All I had was a laptop with no relevant spatial data at all, but with QGIS installed (I didn't even have a mouse to draw with). Still, it was more than enough: I loaded OSM as a base layer and used the annotation tool to add more sense to it. Voilà:

Posted: June 11, 2015 in GIS
This is strange, but I was unable to find instructions on importing QGIS layers into a PostGIS database with the PyQGIS API. The PyQGIS cookbook has an example of exporting layers as .shp files via the QgsVectorFileWriter.writeAsVectorFormat() function and says that the other OGR-supported formats can be used with this function as well. PostGIS is supported by OGR, so people get confused and try to use this function to import data into PostGIS – with no success – and end up writing generic import functions.

After a couple of hours of searching the internet for a solution, I gave up and decided to find the answer the hard way: I started to search through the source code of the DB Manager plugin, which has this nice "import layer" feature. It took about 20 minutes to trace down the function in charge of the import: QgsVectorLayerImport.importLayer(). But the quest wasn't over yet! The documentation says nothing about the provider names accepted by this function. "PostgreSQL" would be the obvious name for the provider, as that is the name of the PostgreSQL provider in OGR, but that is not the case. I had to go through the source code of DB Manager again, and luckily in the comments (and there are quite a few of them in DB Manager: I didn't find a single docstring there) the author wrote that it is "postgres" for PostgreSQL.

Now here is the very basic example code of importing QGIS layer into PostGIS:

uri = "dbname='test' host=localhost port=5432 user='user' password='password' key=gid type=POINT table=\"public\".\"test\" (geom) sql="
crs = None
# layer - QGIS vector layer
error = QgsVectorLayerImport.importLayer(layer, uri, "postgres", crs, False, False)
if error[0] != 0:
    iface.messageBar().pushMessage(u'Error', error[1], QgsMessageBar.CRITICAL, 5)

One of the optional inputs for the creation of classification models in OrfeoToolbox is the XML image statistics file produced by the Compute Image Second Order Statistics tool. If you opt to calculate these statistics, be sure to check the created XML file. If you see an "#INF" record in the band statistics (e.g. "1.#INF"), replace "#INF" with something like "0e+32" ("1.#INF" -> "1.0e+32"), or (a better solution) calculate the statistics for the problematic band independently (probably with another tool) and replace the value entirely.

If you leave an "#INF" record in the XML file, classification models created from it will contain "#IND" records, and when you try to run a classification based on those models, the process will terminate complaining about unrecognised "#IND" values, because values must be numeric, not character strings.

I don't know what causes the "#INF" records to appear in the first place – maybe improperly defined no-data values? In some cases it is impossible to provide a correct no-data value due to limitations of the input field (I use OTB from within the Processing module for QGIS).
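The text replacement described above is easy to script. Here is a minimal, stdlib-only sketch; the statistics file name in the usage comment is just an example, and the replacement value follows the "0e+32" workaround:

```python
def patch_inf(text, replacement='0e+32'):
    # "1.#INF" becomes "1.0e+32" - a large but finite value
    # that the classifier can parse as a number
    return text.replace('#INF', replacement)

# Usage (run once on the generated statistics file; file name is an example):
# with open('imageStatistics.xml') as f:
#     fixed = patch_inf(f.read())
# with open('imageStatistics.xml', 'w') as f:
#     f.write(fixed)

print(patch_inf('1.#INF'))  # 1.0e+32
```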

This was one of the discussion questions in the Disasters and Ecosystems MOOC.

Actually, the answer is simple. The formula for successful environmental degradation consists of two variables: overpopulation and capitalism.

When there are a lot of people, most of them are poor, uneducated and hungry. When you are hungry you will do anything to become less hungry today, even if it can potentially lead to negative consequences tomorrow – consequences you may not even foresee if you are uneducated.

Humans are good at adaptation. When the adaptation is strong enough, it leads to abuse (for example, if you are well adapted to the stock market, you start abusing it to increase your profit even if it costs the other stakeholders dearly – people value their own well-being much more than others', and of course much more than the well-being of the environment, especially when their own impact seems negligible compared to that of the entire population). When you live under the conditions of the free market of the capitalist world, you are your only hope for not being hungry (or for being wealthier) now. And as you know from economic theory, a capitalist economy needs constant growth of consumption and production, so you need more and more resources just to sustain the economy. In a capitalist market, people value today's profit much more than tomorrow's losses.

You see: the capitalist economy needs people to consume more and more; more people – more consumption; more people – more poverty and lack of education; more hungry, uneducated people – more people willing to do anything to survive now, without even bothering themselves about the future.

Overpopulation and a consumption society (created by the capitalist economy) stimulate each other and destroy the environment for today's profits or food, without much care for tomorrow's consequences, because most people are either uneducated or don't care at all – plus you have to live through today to face the consequences of your actions tomorrow (day-by-day living).

Obviously, there are three steps to improve the situation:

  • Decrease the population.
  • Educate people.
  • Create new sustainable economy model that would equally value tomorrow’s losses and today’s profits, and would not rely on constantly increasing consumption.

Pan-sharpening Using R

Posted: February 8, 2015 in GIS, Spatial data
Tags: , , ,

In my previous post I described how to perform pan-sharpening using OrfeoToolbox and QGIS. This time I will show you how to do it in R. At the bottom you will find several functions I wrote on top of the 'raster' package that allow convenient pan-sharpening in R.


You may wonder why I even bothered with pan-sharpening in R when I already have a nice model for pan-sharpening in QGIS. See, one can't control the data type of the imagery returned by pan-sharpening that involves OTB. This leads to some unpleasant consequences: during pan-sharpening one gets floating-point pixel values even if the initial values were integers. So, for example, a 600 MiB multi-spectral image (with integer pixel values) grows to 5.2 GB after pan-sharpening. But if we change the data type of the resulting imagery to force it to store only integers, its size shrinks from 5.2 to 2.8 GB, which is a huge difference. The 'raster' package in R allows control over the output data type. Plus, in R you can play with different filtering options.

The Theory

In OTB, pan-sharpening is performed using the following well-known formula:

PXS(i, j) = XS(i, j) × PAN(i, j) / PANsmooth(i, j)

where i and j are pixel indices, PAN is the panchromatic image, XS is the multi-spectral image and PANsmooth is the panchromatic image smoothed with a kernel to fit the multi-spectral image scale.

We will implement exactly the same approach using the 'raster' package for R.
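To make the formula concrete, here is a toy per-pixel computation (plain Python with made-up numbers; the real workflow below operates on whole rasters at once):

```python
# Toy illustration of the pan-sharpening formula on a single pixel.
# All values are invented for the example.
def pansharp_pixel(xs, pan, pan_smooth):
    # PXS = XS * PAN / PANsmooth
    return xs * pan / pan_smooth

# A panchromatic pixel (180) that is brighter than its smoothed
# neighbourhood (150) scales the multi-spectral value (100) up:
print(pansharp_pixel(100, 180, 150))  # 120.0
```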

Code Usage and Result

As pan-sharpening is the type of procedure that reoccurs over time, I decided to write generic functions for the pan-sharpening itself and for saving the results, to have an easier time in the future.

The usage is as simple as:

pan <- raster('pan.tif')
multi <- brick('multi.tif')

pansharp <- processingPansharp(pan, multi)

output_path <- 'r_pansharp' # includes filename but not the extension
saveResult(pansharp, output_path)


Here are example results from the script and from the OTB model, for one of the illegal landfills in Russia:

Initial multi-band raster

Initial panchromatic raster

Result of pan-sharpening using R script

Result of pan-sharpening using OTB

Which output do you like better: from OTB or from R? Comparing the two results, you can notice that the output from R bears heavier filtering artifacts than the one from OTB. On the other hand, the R output has more hues of green, which actually helps in distinguishing different types of vegetation. As you will see in the code, one can easily adjust or modify the filtering of the panchromatic raster (the extractLPF() function) to get the desired output.

The code



library(raster)  # provides raster(), brick(), focal(), calc(), writeRaster(), etc.

# Create needed functions -------------------------------------------------

pansharpFun <- function(raster){
    # This function pansharpens a raster
    # @param raster - Raster object with 3 bands (to-be-pansharpened, high-res and low-frequency component of the high-res image)
    # @return pansharpened_raster - pansharpened Raster object
    # pansharp = Lowres * Highres / LPF[Highres]
    pansharpened_raster <- (raster[,1] * raster[,2]) / raster[,3]
    return(pansharpened_raster)
}

extractLPF <- function(pan, multi, filter = 'auto', fun = mean) {
    # Returns a low-frequency component of the high-resolution raster by the
    # filter adjusted to the low-resolution raster
    # @param pan - a high-resolution panchromatic raster - Raster object
    # @param multi - low-resolution raster to be pansharpened - Raster object
    # @param filter - a smoothing window - matrix
    # @param fun - a function to process filter (part of the focal() function)
    # @return LPF - a low-frequency component of the high-resolution raster - Raster object

    # Adjust filter size
    if (identical(filter, 'auto')) {
        pan_res <- res(pan) # (x, y) resolution of the panchromatic raster in CRS units
        multi_res <- res(multi) # (x, y) resolution of the lowres raster in CRS units
        x_res_ratio <- round(multi_res[1]/pan_res[1])
        y_res_ratio <- round(multi_res[2]/pan_res[2])
        filter <- matrix(1, nc = x_res_ratio, nr = y_res_ratio)
        # Ensure that the matrix has an uneven number of columns and rows (needed by focal())
        if (nrow(filter) %% 2 == 0) {
            filter <- rbind(filter, 0)
        }
        if (ncol(filter) %% 2 == 0) {
            filter <- cbind(filter, 0)
        }
    }
    LPF <- focal(pan, w = filter, fun = fun) # low-frequency component
    return(LPF)
}

processingPansharp <- function(pan, multi, filter = 'auto', fun = mean){
    # Pansharpening routine
    # @param pan - a high-resolution panchromatic raster - Raster object
    # @param multi - low-resolution raster to be pansharpened - Raster object
    # @param filter - a smoothing window - matrix
    # @param fun - a function to process filter (part of the focal() function)
    # @return pansharp - pansharpened 'multi' raster - Raster object
    LPF <- extractLPF(pan, multi, filter, fun)
    multi <- resample(multi, pan) # resample low-resolution image to match the high-res one
    all <- stack(multi, pan, LPF)
    bands <- nlayers(multi)
    pan_band <- bands + 1
    lpf_band <- bands + 2
    # Pansharpen layers from the low-resolution raster one by one
    pansharp_bands <- list()
    for (band in 1:bands) {
        subset <- all[[c(band, pan_band, lpf_band)]]
        raster <- calc(subset, pansharpFun)
        pansharp_bands[[band]] <- raster
    }
    pansharp <- stack(pansharp_bands)
    return(pansharp)
}

saveResult <- function(raster, path, format = 'GTiff', datatype = 'INT2S'){
    # Saves a Raster object to the given location
    # @param raster - raster to be saved - Raster object
    # @param path - path including filename without extension - string
    # @param format - format of the output raster accordingly to writeRaster() - string
    # @param datatype - datatype of the raster accordingly to writeRaster() - string
    writeRaster(raster,
                filename = path,
                format = format,
                datatype = datatype,
                overwrite = TRUE)
}

# Do pansharpening --------------------------------------------------------

pan <- raster('pan.tif')
multi <- brick('multi.tif')

pansharp <- processingPansharp(pan, multi)

output_path <- 'r_pansharp' # includes filename but not the extension
saveResult(pansharp, output_path)