Journal

News, thoughts and other odds & ends

GDPR and Subtle Tweaks

25 May 2018

The General Data Protection Regulation (GDPR), and the accompanying revised ePrivacy Regulation previously known colloquially as ‘the cookie law’, have been at the forefront of small business minds recently. As a web developer I’ve been doing my best to help affected clients navigate the meaning of the regulations in the context of their online presence.

While I am in no position to offer explicit legal advice, and would not want to do so, there are plenty of good resources available for small businesses, starting with the ICO website, and a number of well-informed legal articles about compliance can also be found online.

The most obvious effect of the legislation that came into force today, as many readers will be aware, has been a deluge of emails asking for re-consent to remain on various mailing lists. Whether that was necessary depends, of course, on the business and on how those email addresses were collected. The assumption by many businesses that they must ask for re-consent regardless is, I believe, wrong (the Guardian picked up on this a couple of days ago too), and potentially damaging to the businesses in question. I have seen a number of comments on blogs, social media, and in press articles from small business owners complaining how damaging the new regulation is to their business, having lost most of their mailing list through lack of response to those re-consent emails.

This post is not about whether or not a business needed to ask for re-consent; that horse has long since left the proverbial stable. GDPR goes beyond keeping people on newsletter mailing lists, and in the context of raised public awareness about privacy issues, when done right it is also an opportunity to build trust in your business through transparency. It does not necessarily have to lead to a catastrophic loss of engagement. It does, however, mean that it is more important than ever to understand how people interact with your website, and to realise that even small changes can make a big difference. For many, fine-tuning how you handle the requirements of GDPR on your website should not be a one-shot deal, but an iterative process of monitoring and adjustment. I will use a case study to illustrate my point.

One of my e-commerce clients is very proactive about understanding how people use their website, and uses a Customer Relationship Management (CRM) tool extensively both to understand how customers are using the products and services on offer, and to maximise engagement through carefully targeted marketing emails, amongst other techniques.

One of the events that happens when a customer checks out on the website is that, a couple of days after the purchase, an action is triggered in the CRM to send a follow-up / support email to that customer with content specifically related to the products purchased. It is a combination of post-purchase support and marketing email, and has proven to be highly effective. With the introduction of GDPR, one of the things we changed was the automatic opt-in to that follow-up email, replacing it with a soft opt-in. While GDPR is primarily organised around the pervading concept of a positive opt-in, in this case, because the follow-up email is a first-party email generated directly as a result of the customer entering into a transaction, and is related to that transaction, a “soft opt-in” is acceptable (ICO guidance here).

In this case the customer is presented with an opt-out during the checkout in the form of a box to check if they do not wish to receive any follow-up communication. Initially that box was placed on the final page of the checkout at the point the customer commits to the purchase. The accompanying text label said “We like to contact our customers about their order to offer support or related information. If you do not wish to receive such communications please check this box.”

The checkbox happened to be on the same page as the box to check to confirm adherence to the Terms and Conditions of sale, at the very last step of the checkout. It is never a great idea to mix two differing checkbox paradigms in the same place, i.e. where the preference would be for one box to be ticked and the other to remain unticked, so the two boxes were separated on the page to try and minimise instances where people might just tick a box without really reading the label.

Now, when a purchase is completed a flag is sent to the CRM indicating whether or not that customer has opted out of the follow-up email sequence, so we were able to understand very quickly how many people were opting out. Within the first couple of days the proportion of people opting out was quite high, around 35%. The time period was too short to give a statistically meaningful sample, but it was good enough, and higher than we wanted, so a small change was made to see if we could improve on it.

The opt-out checkbox was moved to the earlier point in the checkout where a customer provides their email address, and wrapped into a new section called “Communication Preferences” along with the general email newsletter opt-in checkbox. The label was also subtly changed to put more emphasis on the post-purchase support aspect of the email. It was a tiny change that took about 10 minutes to make, but it has had a significant impact on the number of people choosing to opt out of the follow-up marketing emails: within a couple of days the percentage opting out had dropped to just 9%. Over the long term that is a very significant change.

I’m no psychologist so I do not know for sure why this small change has had such a big impact. Perhaps it is because at the final commitment stage of a checkout the customer is already being asked to make a commitment decision, i.e. to accept the T&Cs and commit to the spend, and asking for another decision at that point is too much. I do not know, but what it does illustrate is how seemingly tiny changes can make a big difference to how customers interact with your website.

Making the understanding of your visitor behaviour a process of continuous monitoring, experimentation and improvement has the potential to bring significant reward. By way of another small example, a couple of years ago I changed the operation of the shopping cart on the site. The summary in the header used to show a cart total, number of items and, on hover, the items in the cart, in common with the majority of e-commerce sites. I changed it such that each time the customer added a product to the cart it showed a popup of the cart contents and total spend without leaving the product page. I’m not sure what the psychological effect was, but the overall trend immediately afterwards was for increased order sizes. Perhaps being reminded of the state of the cart at each point kept the customer away from the cart page itself, from which it can be hard to entice someone back into more shopping.

By way of a summary then: if you are concerned about the impact of complying with GDPR on your online business, it pays to be proactive and experiment a little with how you achieve that compliance. For the broader picture, just because your website has been delivered and is up and running, don’t assume that you don’t need to do anything more to it, or that you need to spend a lot of money to have a significant influence on your visitors.

As for me... I’m pretty much the last in the queue as far as updating my own website is concerned, but contrary to popular belief GDPR is not a witch hunt against small businesses trying to do their best with regard to compliance. Compliance is about more than just sticking an updated policy on your site. My business is compliant; I just haven’t had time to update my privacy policy yet. It is a job for the long weekend. My portfolio is also very out of date, one consequence of being in demand being that I have not yet had time to add all the really cool projects I’ve been busy with over the last year and a bit. Do check back soon!

Quick CKEditor Pull Quote Plugin

02 Mar 2018

I use CKEditor 4 a lot as a rich text editor in my projects. In particular, the widget API is brilliant: it allows me to write widgets that give the admin users of my sites the freedom to create all sorts of flexible, rich layouts inside their editor instances without having to ask me to build any specific templates. I've written a few that have a bunch of user-configurable options, via dialog box, for creating some really interesting content specific to various projects; Arteye in particular used a few, and Elly Jahnz made some nice scrapbook-style layouts using them. I'll share some of those at a later date. For now, as time is short, I just wanted to share this simple one, having just made it for Elementum Journal. It's a very straightforward widget, with no user-configurable options, for inserting a pull quote into the flow of an article.

It has no dependencies other than the CKEditor 4 widget plugin. You can download a zip file for the widget package itself here, with the toolbar icon. Drop it into your CKEditor plugins folder and include, at a minimum, the following in your config.js file:

config.extraPlugins = 'widget,pullquote';

 

...and download the required widget plugin here.

The plugin itself is dead simple: it defines a button for the toolbar that inserts an <aside> that can be used inside an <article> and subsequently styled however you like.

E-commerce product migration with DOMDocument()

16 Jan 2018

When building a new e-commerce site to replace something pre-existing I usually try to get hold of a copy of the database in use, so that I can save the client significant set-up time by simply mapping products, categories and so on across to the new database. If the agency in question is generally helpful that is rarely a problem, and thus my preferred solution. Sometimes, however, the client has had a poor experience with their current agency, and/or the agency in question is simply unhelpful, and on occasion deliberately obstructive. Sadly it does happen, and I have even come across cases where the current agency suspected that the client would be going elsewhere and simply took the existing site offline with no warning. I've been working around one such situation recently, in which the client was potentially faced with an extended workload of having to recreate many thousands of products in the new site. Not ideal. With no access to the database a different approach was required... which is where PHP's Document Object Model (DOM) comes in.

The PHP DOM provides a very handy API for operating on structured XML/HTML documents, and given that the product pages on the existing site all used a common template with identifiable nodes for the various key product parameters, therein lay the solution to saving the client hundreds of hours of tedious effort. This case will have commonality with a number of development situations, so I figured I would share my solution here so you can take from it what you will.

I'd already culled the product category structure from the existing site, so what follows deals with the products themselves, together with images and any assignments to those categories.

My e-commerce platform is built upon the CodeIgniter 3 framework, so the scripts are presented in the context of a CI controller that is part of the build in question, but of course it is easily adapted to any other context and really is just a bit of procedural code. It was just easier when it came to doing all the necessary work to map the harvested data into the local site under development.

The solution assumes that the site being examined has a proper XML sitemap. If it doesn't, then some sort of recursive function starting from the category menu would be a good place to start in terms of harvesting all the site URLs, as sketched below.
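By way of illustration only, here is a minimal sketch of what such a fallback might look like, assuming the site uses ordinary relative in-site links and reusing the fetch_html() function reproduced later in this post. The function name is mine and it is not part of the controller that follows:

function harvestUrls($url, array &$seen = array())
    {
        /* Hypothetical fallback for a site with no sitemap: recursively follow
        * relative, in-site links starting from a given page (e.g. the category menu)
        * and return the full list of URLs discovered.
        */
        if(isset($seen[$url])) {
            return array_keys($seen);
        }
        $seen[$url] = true;

        $dom = new DOMDocument();
        libxml_use_internal_errors(true);
        @$dom->loadHTML($this->fetch_html($url));
        libxml_clear_errors();

        foreach ($dom->getElementsByTagName('a') as $a) {
            $href = $a->getAttribute('href');
            //only follow relative links, i.e. links within the site being examined
            if(strpos($href, '/') === 0) {
                $this->harvestUrls($this->baseurl.$href, $seen);
            }
        }
        return array_keys($seen);
    }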

I haven't really included anything by way of error handling, since this controller only gets called manually by me and I'm interested only in its utility, but it could easily be turned into a tool with a nice user interface and so on.

It all worked well, and in a matter of minutes it successfully recreated thousands of products in the local site. A huge timesaver.

While you're at it, you can use the same approach to write all the 301 redirects you'll need to map the old product URLs to the new ones, ready for when the new site goes live.
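Again, as a sketch only: given the harvested URLs, something along the following lines would generate a block of Apache Redirect directives, where new_url_for() is a hypothetical helper standing in for however your own platform derives the new product URL:

function writeRedirects($urls)
    {
        /* Build a list of 301 redirects, one per old product URL, ready to be
        * pasted into an .htaccess file. new_url_for() is a placeholder for your
        * own old-URL-to-new-URL mapping.
        */
        $lines = '';
        foreach ($urls as $oldUrl) {
            $oldPath = parse_url($oldUrl, PHP_URL_PATH);
            $newPath = $this->new_url_for($oldUrl); //hypothetical helper
            $lines .= 'Redirect 301 '.$oldPath.' '.$newPath."\n";
        }
        file_put_contents('redirects.txt', $lines);
    }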

 

1. The basic controller + index function.


class Get_products extends Site_Controller 
{
    private $baseurl = 'http://www.somesite.com'; //the base url of the site being examined

    function index()
    {
        ini_set('memory_limit', '-1');
        set_time_limit(0);

        $sitemap = $this->baseurl.'/sitemap.xml';

        //get all the urls
        $urls = $this->parseSitemap($sitemap);

        if(!empty($urls)) {

            echo 'Processing '.count($urls).' urls...';

            $n = 0;

            foreach ($urls as $url) {
                if($this->parseProduct($url))
                {
                    $n++;
                }
            }
            echo $n.' products were successfully processed.';
        }
        else {
            echo 'No urls were found';
        }
        return;
    }
}

 

2. Parse the Sitemap

    function parseSitemap($sitemap)
    {
        /** This function simply gets all the URLs in the sitemap. Assuming the sitemap is structured correctly, each URL is wrapped in a loc tag.
        * In this case all product urls contain the string '_shop', so any that don't are ignored.
        * It's not critical, since ultimately the product page structure is used to determine whether the URL is a product or not, but it saves a bit of overhead.
        */

        $urls = array();
        $DomDocument = new DOMDocument();
        $DomDocument->preserveWhiteSpace = false;
        $DomDocument->load($sitemap);
        $DomNodeList = $DomDocument->getElementsByTagName('loc');

        foreach($DomNodeList as $url) {
            if(stripos($url->nodeValue, '_shop') !== false) {
                $urls[] = $url->nodeValue;
            }
        }
        return $urls;
    }

 

3. Parse the product

This function does the work of picking through a retrieved product page. It includes calls to a number of helper functions, which are reproduced with explanations below.

function parseProduct($url)
    {
        $html = $this->fetch_html($url);

        $dom = new DOMDocument();
        $dom->preserveWhiteSpace = false; //note: must be set before loadHTML() to have any effect

        libxml_use_internal_errors(true); //if HTML5 then the lack of a DTD will cause errors on load; this suppresses those.

        @$dom->loadHTML($html);

        libxml_clear_errors();

        /**
         * In this case the product page structure uses an h1 tag for the product title.
         * If no title is found then the URL is ignored, as it's not a product.
         * The helper function elementByClass() searches the DOM for the appropriate element.
         */

        $className = 'product-title';
        $tagName = 'h1';
        $element = $this->elementByClass($dom, $tagName, $className);

        if($element !== false) {
            
            $productTitle = $element->nodeValue;

            /**
             * Subsequent product parameters can be discovered using the same method, based on tag and class.
             * I'd already retrieved all the category names in use by the site, so I also grab the product category assignment in order to set up categories.
             * In this case the existing site had a one-to-one relationship between products and categories. If dealing with a one-to-many, and the sitemap has unique URLs,
             * then simply look for duplicate products in saveProduct() and do the category assignments as appropriate (assuming your new site can handle a one-to-many relationship).
             */

            // Look for a category name

            $className = 'detailProductCat';
            $tagName = 'div';
            $element = $this->elementByClass($dom, $tagName, $className);

            if($element !== false) {
                $productCategory = $element->nodeValue;
            }
            else {
                $productCategory = null;
            }

            // And for a product description

            $className = 'detailProductDesc';
            $tagName = 'div';
            $element = $this->elementByClass($dom, $tagName, $className);

            if($element !== false) {
                $productDescription = strip_empty_paras($this->innerHTML($element));
            }
            else {
                $productDescription = null;
            }

            // Now find a price... in this case the site being analyzed didn't permit different prices for the various options on a given product,
            // and in this template the price was the first h2 on the page.

            $element = $dom->getElementsByTagName('h2')->item(0);
            if($element !== null) {
                $productPrice = preg_replace('/[^0-9.]/','',$element->nodeValue);
            }
            else {
                $productPrice = null;
            }

            /** Product Options
            * The site being analyzed used a <select> to present the different variations of a given product.
            * So if the product has options, find those by finding the select and iterating over its options.
            * If no <select> is found then it must be a single product with no choices.
            */

            $className = 'cartDdlOptions';
            $tagName = 'select';
            $element = $this->elementByClass($dom, $tagName, $className);

            
            $productOptions = array();
            if($element !== false) {
                $options = $element->getElementsByTagName('option');
                foreach ($options as $option) {
                    $productOptions[] = $option->nodeValue;
                }
            }
            
            /** Product images use the same approach. In this case the site used a carousel plugin, so it was easy to identify the appropriate classname.
            * Images are copied to a local directory for later use.
            * The source site generated image srcs dynamically, so typically an image source could look like "/_loadimage.aspx?ID=172236",
            * hence the call below to a function that inspects the response headers to determine the image type to save as.
            */
    

            $className = 'cycle-slide';
            $tagName = 'div';
            $element = $this->elementByClass($dom, $tagName, $className);
            $imagePaths = array();

            if($element !== false) {
                $images = $element->getElementsByTagName('img');
                $i = 0;
                $savePath = 'imagesTemp/';

                foreach ($images as $image) {
                    $src = $this->baseurl.$image->getAttribute('src');

                    //get the file contents

                    $imageString = file_get_contents($src);  

                    if($imageString !== false) {
                        //and work out the file type. Only interested in jpg, gif, or png in this case.

                        $type = $this->find_file_type($src);

                        if($type == 'gif' || $type == 'jpg' || $type == 'jpeg' || $type == 'png') {
                            $ext = str_replace('e', '', $type); //I know jpeg is a valid extension but I don't like it...

                            //save the file with a nice, SEO friendly filename. Codeigniter has a handy helper function, url_title(), that does a good job of cleaning up strings for URLs.

                            $save = file_put_contents($savePath.url_title($productTitle).'-'.$i.'.'.$ext,$imageString);
                            if($save !== false) {
                                $imagePaths[] = $savePath.url_title($productTitle).'-'.$i.'.'.$ext;
                                $i++;
                            }
                        }
                    }
                }
            }

            $product = array(
                'productTitle' => $productTitle,
                'productCategory' => $productCategory,
                'productDescription' => $productDescription,
                'productPrice' => $productPrice,
                'productOptions' => $productOptions,
                'imagePaths' => $imagePaths
            );

            // Pass the product data to the saveProduct function that does whatever your own e-commerce platform needs in terms of database and file structure.
            return $this->saveProduct($product);
            
        }
        
        return false;

    }

4. Get the HTML

A simple cURL request to fetch the HTML for a given URL.

function fetch_html($url)
    {
        $ch = curl_init();
        $timeout = 5;
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
        $html = curl_exec($ch);
        curl_close($ch);

        return $html;
    }
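One caveat, based on general cURL behaviour rather than anything specific to this project: if the site being examined redirects its pages (for example from http to https), curl_exec() will return the redirect response rather than the page itself, so a couple of extra options may be needed. A slightly hardened variant of the above:

function fetch_html($url)
    {
        $ch = curl_init();
        $timeout = 5;
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); //follow any redirects...
        curl_setopt($ch, CURLOPT_MAXREDIRS, 5);         //...but not forever
        $html = curl_exec($ch);
        curl_close($ch);

        return $html;
    }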

 

5. Find elements by class

The site being examined used specific classes to identify key areas of markup in the product template. We need to grab those to get to the product parameters. PHP's DOMDocument doesn't include a direct means of accessing nodes by classname so this function takes care of that.

function elementByClass(&$domParent, $tagName, $className)
    {
        /** PHP's DOMDocument() class doesn't include a direct means of identifying nodes by classname,
        * but you can iterate over child nodes looking for the appropriate class attribute.
        * I only want the first instance, but have structured the function to provide an array of nodes should that be needed.
        */

        $nodes = array();

        $childNodes = $domParent->getElementsByTagName($tagName);

        foreach ($childNodes as $node) {
            if (stripos($node->getAttribute('class'), $className) !== FALSE) {
                $nodes[] = $node;

                //you could just do this if you only ever want the first node:
                //return $node;
            }
        }

        //in this case I just want the first
        if(!empty($nodes[0])) {
            return $nodes[0];
        }
        else {
            return false;
        }
    }
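As an aside, if you'd rather not iterate manually, PHP's built-in DOMXPath class can do the same lookup in one query. A minimal equivalent sketch (note that contains() mirrors the loose substring matching above rather than an exact class match):

function elementByClassXPath($dom, $tagName, $className)
    {
        /* Same job as elementByClass() above, done with an XPath query instead
        * of manual iteration. Returns the first matching node, or false.
        */
        $xpath = new DOMXPath($dom);
        $nodes = $xpath->query('//'.$tagName.'[contains(@class, "'.$className.'")]');

        return ($nodes->length > 0) ? $nodes->item(0) : false;
    }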

 

6. Inner HTML

The product description exists over multiple paragraphs whose markup I was keen to preserve, so this helper function does just that.

function innerHTML( $parentNode )
    {
        /* Neat helper function that extracts the inner HTML of a DOM node.
        * Credit to https://kuttler.eu/en/post/php-innerhtml/ for saving me time.
        */

        $innerHTML = '';
        $elements = $parentNode->childNodes;

        foreach( $elements as $element ) {
            if ( $element->nodeType == XML_TEXT_NODE ) {
                $text = $element->nodeValue;
                $innerHTML .= $text;
            }
            elseif ( $element->nodeType == XML_COMMENT_NODE ) {
                //skip comment nodes
            }
            else {
                $innerHTML .= '<';
                $innerHTML .= $element->nodeName;
                if ( $element->hasAttributes() ) {
                    $attributes = $element->attributes;
                    foreach ( $attributes as $attribute )
                        $innerHTML .= " {$attribute->nodeName}='{$attribute->nodeValue}'";
                }
                $innerHTML .= '>';
                $innerHTML .= $this->innerHTML( $element );
                $innerHTML .= "</{$element->nodeName}>"; //close the tag
            }
        }
        return $innerHTML;
    }

 

7. Image file types

Browsers use the content-type header, rather than the file extension, to determine the type of image file being served. In this case, because the site under examination served image data dynamically, it's necessary to know what the image type is so that the image can be copied and saved correctly. This function does a simple examination of the headers served from the image src.

function find_file_type($image_src)
    {
        /* Browsers use the content-type header to understand whether something is an image and, if so, what kind it is.
        * This function simply uses PHP's built-in get_headers() to fetch the headers returned at the image src url, and returns the type if it's an image.
        */

        $headers = get_headers($image_src);

        if(!empty($headers)) {
            foreach ($headers as $h) {
                //just looking for an "image/*" string
                if(strpos($h, 'image/') !== FALSE)
                {
                    $dat = array();
                    //extract the type substring, e.g. 'jpeg' from 'Content-Type: image/jpeg'
                    preg_match("/image\/([a-z]+)/i", $h, $dat);
                    if(!empty($dat[1])) {
                        return strtolower($dat[1]);
                    }
                }
            }
        }

        return false;
    }
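It's worth noting that since parseProduct() has already downloaded the image data with file_get_contents() by the time the type is needed, the extra HTTP request made by get_headers() could be avoided by sniffing the type from the bytes themselves. A minimal alternative sketch using PHP's built-in getimagesizefromstring() (PHP 5.4+):

function find_file_type_from_data($imageString)
    {
        /* Alternative: determine the image type from the downloaded bytes
        * rather than from the response headers.
        */
        $info = getimagesizefromstring($imageString);

        if($info !== false && !empty($info['mime'])) {
            return str_replace('image/', '', $info['mime']); //e.g. 'image/jpeg' -> 'jpeg'
        }
        return false;
    }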

 

8. Save Product

Just whatever you need to do here...

function saveProduct($product)
    {
        /* This function contains whatever you need to do to create the product in the context of your own site.
        * In my case: various database operations around products, product options and category assignments, plus setting up the file structure for the product images.
        * For the record, my e-commerce platform maintains product images in separate folders for each product; it makes user management of them much simpler than having a single repository with thousands of pictures.
        */
    }

 

 

MySQL conditional composite join with a subquery (Ecommerce, Sage Accounts Import)

05 Jan 2018

I'm going to try to make the effort to be a bit more forthcoming with useful dev stuff... stuff that isn't necessarily obvious, that I come across from time to time. So to kick that off, here's a little tidbit I needed to figure out just now. My e-commerce platform uses separate tables for products, product variations (e.g. large, small, black, red... whatever) and product stock. I won't reproduce the tables in full here, but essentially consider the 'products' table as a list of primary, or parent, products. The 'product_options' table contains all the children, if applicable, of those products, with their own SKUs, prices and so on.
Product stock is maintained, for various reasons, in a separate table that contains fields for the product ID, option ID (if applicable), current stock level, and fields for tracking stock movement.

In building a tool for importing product stock data from Sage Accounts I needed a query that would give me a single flat array of products with stock levels. The join would be conditional on whether or not a product had child products, and if it did, the join would work on composite fields (i.e. product ID and option ID). Now, there are many ways of skinning the proverbial SQL cat, but this is how I did it: a subquery to get a flat list of products/product variations, with the JOIN condition to the stock table inside a CASE statement.

I have not benchmarked it for performance as I don't really care: the query is run once, as an admin task, during the parsing and error checking of an imported CSV file that in this case runs to in excess of 20,000 records. In that context a few milliseconds either way is not a worry.

 

SELECT a.*,b.stock 
FROM 
(
    SELECT p.product_id, p.name, p.price, o.option_name, p.sku, o.option_id, o.sku AS option_sku, o.option_price
    FROM products p 
    LEFT JOIN product_options o ON p.product_id = o.parent_id
    WHERE p.deleted = 0 AND (o.deleted = 0 OR o.deleted IS NULL) 
) AS a 
LEFT JOIN product_stock b ON (CASE WHEN a.option_id IS NULL  
                                   THEN b.product_id = a.product_id
                                   ELSE (b.product_id = a.product_id AND a.option_id = b.option_id)
                               END)
ORDER BY a.sku ASC;

 

There. The trick is simply that the CASE expression returns one or other boolean comparison for MySQL to evaluate as the join condition. It might come in handy if you have a similar problem, especially if you don't always find SQL syntax completely intuitive.

 

I would do well...

22 Dec 2017

When people ask me if a blog or news feed is a good idea on their site I always say something along the lines of "...only if you can be bothered to keep it up to date. A blog with only old posts sends the wrong message; better not to have one at all if you won't keep it alive with fresh content...". I would do well to listen to my own advice... where the almost-a-year went since my last post I have no idea. It was a phenomenally busy year, and good blog intentions rather fell by the wayside. It was a year flavoured by a wide variety of applications, from a booking platform for a surf school, through some interesting e-commerce briefs, to a large scale employment agency management application (ongoing). Having decided to take a proper break from my keyboard over the holiday, I'll make time early in the new year to bring my portfolio up to date with new work. My brain is sludge at the time of writing and wants to do very little more than get out on the cliffs for a hike, ride a bike, and spend time with friends. Hasta 2018!

A Painted Roads Story

21 Jan 2017

This is sort of covered in my portfolio, but actually it makes rather a nice story, in my opinion, for a blog post. It is about web development, but it also has to do with how interesting and diverse life can be if you let it. In the spring of 2010 I was cycling north through the baking hot, barren deserts of northern Argentina. As a consequence of being there I made the acquaintance of another cyclist... an English chap, David, who had made his home in southeast Asia. The reasons for our meeting are as far from anything to do with web development as you can imagine... I was pondering a choice of route: either over the high (4700m) Paso San Francisco into the Atacama desert of Chile and north into Bolivia that way, or to continue north in Argentina and cross the border into Bolivia at the La Quiaca - Villazon border post. In the event I decided I was keen to enjoy a few days off and some cold beers in Salta, so I kept going north in Argentina. It was super... I digress, however. Through subsequent conversations I discovered that David had been working as a guide for a large cycling holiday operator and had become somewhat disillusioned with the way those tours were run, and instead was keen to start a business of his own that would allow him to run thoughtfully designed, small group tours exploring the lesser-touristed backroads of his favourite part of the world - Southeast Asia. All that was missing was something of a catalyst... and our meeting turned out to be that catalyst.

On my return to the UK we put our heads together and made a business. After much head-scratching we called it Painted Roads Cycling... it seemed appropriate... "colourful cycling tours". I got my paintbrush out and scribbled a rough logo that has endured to this day, put together a business plan, a simple website to get things off the ground, and some brochures... and that was it... it just happened and all worked out brilliantly. Having helped to get the business off the ground I took a step back to pursue the business that is mikesimagination.net, and now, six years later, Painted Roads has matured into a rather lovely little business with a reputation for personal service and a portfolio of really special tours.

Back in the summer of 2016 David asked me if I could build him a new website that could support a whole bunch of features he wanted, that would give his customers, both new and loyal existing ones, the best possible experience, and that could differentiate him from the large, corporate cycling tour operators. So I did. It's here: http://www.paintedroads.com. It's built using my modular CMS and hence can integrate new features and functionality effortlessly. I've just finished building a new module for creating promotions with landing pages, one that can issue user-specific discount vouchers for different campaigns. It didn't take long, and hence has worked out stacks cheaper than using a subscription service for creating promo landing pages, as many businesses do. The CMS is also highly optimised for search engines, so despite Painted Roads being only a very small business, in Google searches we've been able to get it to rank alongside the large, industry-leading tour operators with very little effort for a range of key queries.

Enough rambling from me... I thought I would write this up because one of the things I enjoy most about what I do these days is the sheer diversity of businesses and individuals I've been able to develop long-term relationships with, and being in a position to use my skills to make a real difference to those businesses.

 

As the owner of Painted Roads Cycling I am delighted with Mike's work. The new website is not only beautiful from the outside, it's simple and efficient behind the scenes, and Mike is always happy to go the extra mile to make things work just so. And the real beauty is, my input was minimal; all I needed to say was "time for a new website Mike", and this is what appeared!

- David Walker, Painted Roads Cycling

 

As a footnote, I still want to go back and ride the Paso San Francisco. Perhaps in 2018... if I do, it'll be over on my personal blog at http://www.seasurfdirt.com
