Setting Up a Local Blockchain with Ganache

Blockchain graphic

Why would I want to do this?

Interacting with blockchains and blockchain technology probably seems like a very complex task to most people. How do you even get started? Don’t they run on servers spread across the globe? How would I make a transaction and see the result? Wouldn’t I have to use real money to do this?

If you want to play around with blockchain technology but don’t know how to get started, a great way is to run a local test blockchain on your own computer. It’s easy to set up, carries no risk of losing your own money, gives you immediate insight into what’s happening and can be reset at any moment so you can try over and over again.

We’ll use two applications to get started:

  • Ganache
  • MetaMask

From start to finish it should take no longer than 30 minutes ⌛

Ganache – A Personal Blockchain

The absolute easiest way to get started is by using Ganache. Ganache is a personal Ethereum blockchain running on your own computer that’s incredibly easy to install and get running. It’s basically a virtualized blockchain application.

It’s available for Windows, Mac and Linux, just download the installer, double click to install and run. Takes 5 minutes to get started.

MetaMask

Once Ganache is installed you need a way to interact with the blockchain. There are many applications available to do this but the easiest is probably MetaMask. It’s a browser extension that supports Chrome, Brave, Firefox and Opera plus has iOS and Android apps in beta. Follow the directions on the site to install and create an account.

We will use MetaMask to connect to our local blockchain server so we can add accounts and send test transactions between them. We will then be able to see these transactions in Ganache.

To install MetaMask get it from the Chrome Web Store and follow the instructions to create an account.

Connect MetaMask to Ganache

Assuming you have now installed Ganache and MetaMask, we need to connect the two applications. First run Ganache and select the Quickstart option. This uses the default settings and gets us up and running.

Ganache starting view

Ganache will now create a test blockchain and some test accounts, each holding 100 ETH (test ETH of course) by default. You can see the accounts below along with their public addresses, balances and transaction counts. That’s all there is to getting the test blockchain 🤜

Ganache accounts view

Now that Ganache is running we need to connect it to MetaMask. Open MetaMask and log into your account.

MetaMask login

To make working in MetaMask easier you can click on the more menu and choose ‘Expand View’ to open it full screen.

Expand to fullscreen

To connect MetaMask to our local blockchain we need to change a few settings in MetaMask. First click on the network name at the top and select ‘Custom RPC’.

Change MetaMask network

Here we add the details for our local blockchain. If you look in the header of Ganache you can see the server details we will use.

Ganache RPC server settings

Call the network anything you want, but the URL must be http://127.0.0.1:7545 since Ganache is running on port 7545 on localhost. Leave the rest blank.

Once you click Save you are connected to the Ganache blockchain, although right now there’s not much to see. To really see what’s going on we need to add accounts to MetaMask.
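As a quick sanity check you can also talk to Ganache’s RPC endpoint directly from a terminal. This is just a sketch using the standard Ethereum JSON-RPC method eth_blockNumber; the reply is a hex block number ("0x0" on a fresh chain).

# Ask Ganache for the current block number over JSON-RPC
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://127.0.0.1:7545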

Adding Accounts

Returning to Ganache, choose one of the accounts to add and click on the key symbol. This will allow us to see the private key of the account. Obviously this is not something you would normally be able to do since private keys are, by their nature, private.

Show the private key

Copy the private key from the next screen.

Copied private key

Returning to MetaMask, click on the circle logo and select ‘Import Account’.

Import accounts

Make sure the type is Private Key and then paste in the private key you copied from Ganache.

You’ll see the account is imported with a balance of 100 ETH which matches what we saw in Ganache. You can edit your account name by clicking on the name and changing it in the dialog that opens.

Imported Account
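You can double-check the balance against the RPC endpoint too. A sketch, where 0xYourAccountAddress is a placeholder for the address you just imported; eth_getBalance returns the balance in wei as a hex string.

# Balance of the imported account in wei (hex encoded)
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0xYourAccountAddress","latest"],"id":1}' \
  http://127.0.0.1:7545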

Creating Transactions

Now we’re finally ready to start interacting with the blockchain and creating transactions.

First in Ganache choose an account you wish to send ETH to and copy the address.

Recipient Address

Back in MetaMask click the Send button as shown here.

Sending test ETH

In the ‘Add Recipient’ field paste the account address you just copied from Ganache, choose the amount of ETH you wish to send and pick a Transaction Fee. Because this is a test network the fee is irrelevant, as the blocks are mined automatically, but normally this fee controls how the transaction will be prioritized by miners.

Send options

You’ll now see the transaction in MetaMask along with the remaining balance (100 – 5 – transaction fee).

Transaction in MetaMask

The same can be seen in Ganache: the sending account has been debited 5 ETH and the receiving account credited 5 ETH. The transaction fee was consumed creating the block containing the transaction.

New balances in Ganache
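As an aside, because Ganache keeps its test accounts unlocked, the same transfer could be made without MetaMask at all by posting a standard eth_sendTransaction call to the RPC endpoint. A sketch with placeholder addresses; the value is 5 ETH expressed in wei as hex.

# Send 5 ETH (5000000000000000000 wei) between two Ganache accounts
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_sendTransaction","params":[{"from":"0xSenderAddress","to":"0xRecipientAddress","value":"0x4563918244f40000"}],"id":1}' \
  http://127.0.0.1:7545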

Selecting the transactions tab in Ganache gives you a view of all the transactions made so far.

Ganache transactions

Selecting the Blocks tab in Ganache gives a view of the blocks mined. So far you can see one block was automatically created when we started Ganache. This is known as the Genesis block and forms the root of the blockchain. Our transaction created a new block linked to this initial block. Every block created after the initial block is mathematically linked back to the previous block, and so on all the way back to the Genesis block (block 0).

Generated Blocks
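You can see this linking for yourself over RPC. A sketch using the standard eth_getBlockByNumber method; the parentHash of block 1 should equal the hash of block 0.

# Fetch the genesis block and the next block, then compare "hash" and "parentHash"
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x0",false],"id":1}' \
  http://127.0.0.1:7545
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1",false],"id":1}' \
  http://127.0.0.1:7545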

Monitoring a Bitcoin Node Using Node-RED

Now my Bitcoin full node is fully synchronized and running, I thought it would be good to set up some simple monitoring to check it’s still up and keeping up with the Bitcoin blockchain.

Bitnodes API

Helpfully there’s already a site that monitors all the full nodes and also provides a handy API at https://bitnodes.earn.com/.

Bitnodes site

If you look at the full list you can see there are currently 9034 full nodes worldwide with 20 up in Denmark.

Worldwide full node list

Since the site already monitors nodes I can cheat and use their API (https://bitnodes.earn.com/api/) to get the data on my own node. The relevant endpoint is the Node Status endpoint since this returns the UP/DOWN status as well as some other useful information, such as which block height the node is currently synchronized to.

Using it is very simple: call https://bitnodes.earn.com/api/v1/nodes/80.71.136.204-8333/ using your own IP and port (8333 is the standard for Bitcoin Core).

A call to this endpoint for my own node returns this JSON data. I’ve highlighted the status and block height data.

{
    "hostname": "",
    "address": "80.71.136.204",
    "status": "UP",
    "data": [
        70015,
        "/Satoshi:0.18.1/",
        1567654046,
        1037,
        593492,
        "80-71-136-204.u.parknet.dk",
        "Copenhagen",
        "DK",
        55.6786,
        12.5589,
        "Europe/Copenhagen",
        "AS197301",
        "Parknet F.M.B.A"
    ],
    "bitcoin_address": "",
    "url": "",
    "verified": false,
    "mbps": "0.736781"
}
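To spot-check the endpoint from a terminal, curl plus jq is enough. A sketch: judging by the response above, the synced block height is the fifth element of the data array, hence the index 4.

# UP/DOWN status and synced block height (replace the IP-port with your own node)
curl -s https://bitnodes.earn.com/api/v1/nodes/80.71.136.204-8333/ | jq -r '.status, .data[4]'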

Node-RED Integration

So now I have an easy data source, but how do I get notifications from this data? This is where Node-RED comes in useful. On my server with Node-RED already installed I created a small workflow that is triggered to run every hour using the inject node.

Node-RED monitoring workflow

Once triggered the workflow does the following (a rough shell equivalent is sketched after the list):

  • Calls the Bitnodes API
  • Parses the returned JSON to extract the status and blockchain height
  • Calls another API to get the current Bitcoin block height (to compare to my node)
  • Formats a payload and sends it to Slack using a webhook
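For comparison, here is roughly what those four steps look like as a plain shell script. This is a sketch, not the Node-RED flow itself; the webhook URL is a placeholder and data[4] is my reading of the block height from the JSON above.

#!/bin/bash
# Sketch of the monitoring flow: query Bitnodes, compare heights, post to Slack
NODE_URL="https://bitnodes.earn.com/api/v1/nodes/80.71.136.204-8333/"
WEBHOOK="https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder webhook URL

STATUS=$(curl -s "$NODE_URL" | jq -r '.status')
NODE_HEIGHT=$(curl -s "$NODE_URL" | jq -r '.data[4]')
CHAIN_HEIGHT=$(curl -s https://blockchain.info/q/getblockcount)

# Format the payload and send it to Slack
curl -s -X POST -H "Content-Type: application/json" \
  --data "{\"text\": \"Bitcoin node is $STATUS at block $NODE_HEIGHT of $CHAIN_HEIGHT\"}" \
  "$WEBHOOK"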

Creating a Schedule

Using the inject node it’s possible to set up a schedule for the flow to run. This could be every X seconds, minutes, hours or at a specific time. I’ve set the flow to run every hour.

Inject node set to run every hour

Parse JSON from Bitnodes

Parse JSON

This node parses the returned JSON and saves the status and current block height to variables for later use.

Get Bitcoin Block Height

To get the current height of the Bitcoin blockchain we can use the Blockchain.info API. A call to https://blockchain.info/q/getblockcount returns the height in plain text.

Blockchain.info API
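From a terminal the same call is simply:

# Returns the current chain height as plain text
curl -s https://blockchain.info/q/getblockcount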

This is combined with my node data to create a message payload.

Message Payload

Post to Slack

Finally the payload is formatted ready for posting to Slack.

Slack webhook payload

It’s then sent via the Slack webhook.

POST to Slack via webhook

This is how the data appears in Slack. You can see the inject node in Node-RED is running the flow every hour and my node is keeping up with the blockchain, as it only ever falls a few blocks behind the main chain before catching up again.

Node data posted into my Slack channel

So by using Node-RED and minimal coding in JavaScript I’ve set up an automated monitoring tool for my Bitcoin node where I can see the progress every hour and be notified both on my desktop computer and on mobile 💪

Running a Bitcoin Full Node on a Raspberry Pi 4

First Attempt

I’ve wanted to run a Bitcoin full node for a while now. Not because it makes any money, quite the contrary, it actually costs money to run a node, but to better understand a technology there’s nothing better than learning by doing 🧠

A full node is a program that fully validates transactions and blocks. Almost all full nodes also help the network by accepting transactions and blocks from other full nodes, validating those transactions and blocks, and then relaying them to further full nodes.

bitcoin.org – What is a full node?

Once you have a full node running you can also query the blockchain locally using the command line or from Python for example, plus if you allow inbound connections you are contributing to the network.

I wanted the setup to be cheap, easy and reliable so using a Raspberry Pi was the obvious choice. My first attempt was a few months ago using a Raspberry Pi 3 Model B+ I bought second hand for next to nothing. I managed to get it up and running, but the strain of the initial blockchain download and synchronization (currently running at 275 GB) would frequently crash the Pi due to lack of memory. Slightly frustrated, I gave up and parked the idea for a while.

A New Start

Skip forward a few months and now the Raspberry Pi 4 is available with way better performance 🚀

Raspberry Pi 4

I bought the 4 GB version, which is now running the headless (no GUI) version of Raspbian Buster Lite and is connected to an external 1 TB hard drive. I use SSH to connect so there’s no need for a mouse, keyboard or screen, just power and a network connection.

Installing

First a WARNING. Don’t do this if you have a metered or capped internet connection as you’ll be downloading the entire Bitcoin blockchain of 275 GB (currently).

I won’t cover the details of the initial setup of your Raspberry Pi as there’s a full guide from the Raspberry Pi organisation.

Likewise the Bitcoin Core install details have been extensively documented by many others; the guide I used was RaspiBolt on GitHub. I didn’t get as far as installing Lightning, but the guide is excellent and, unlike many guides I’ve tried, works 100% if you follow every step carefully. The only deviation I made was starting my node directly on Bitcoin mainnet (instead of testnet) by editing this one line in bitcoin.conf so testnet=1 is commented out.

# remove the following line to enable Bitcoin mainnet
#testnet=1

The entire install including initial setup of the SD card took about 1-2 hours but you learn a lot along the way, the basic steps being:

  • Download and install Pi OS to SD card
  • Enable ssh
  • Setup your local network
  • Create users on the Pi
  • Mount external disk
  • Setup firewall and security
  • Download and install Bitcoin Core software
  • Configure and start Bitcoin Core (a minimal example config is sketched after this list)
  • Grab a ☕ or ☕☕ and wait
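The exact configuration is covered step by step in the RaspiBolt guide, but a minimal mainnet bitcoin.conf for a 4 GB Pi might look something like this (illustrative values of my own choosing, not the guide’s verbatim settings):

# /home/bitcoin/.bitcoin/bitcoin.conf (minimal example)
daemon=1            # run bitcoind in the background
dbcache=2000        # MB of RAM for the database cache, leaves headroom on a 4 GB Pi
maxconnections=40   # cap peer connections to keep memory usage down
# remove the following line to enable Bitcoin mainnet
#testnet=1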

Progress So Far

You can see below that I started the software about 4 days ago.

bitcoind

Using the bitcoin-cli command you can query the bitcoin process. You can see that in the last 4 days I’ve downloaded 160030715927 bytes, or about 160 GB, and that my 1 TB disk is 20% full with 173 GB of data stored.

Blockchain disk space used
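If you want the same numbers on your own node, two standard bitcoin-cli calls provide them (getnettotals reports total bytes received and sent, while getblockchaininfo includes the chain height, verification progress and size_on_disk):

# Total bytes received and sent since bitcoind started
bitcoin-cli getnettotals
# Chain height, verification progress and disk usage
bitcoin-cli getblockchaininfo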

To see all the possible commands type ‘bitcoin-cli help’ (there are a lot!)

$ bitcoin-cli help
== Blockchain ==
getbestblockhash
getblock "blockhash" ( verbosity )
getblockchaininfo
getblockcount
getblockhash height
getblockheader "blockhash" ( verbose )
getblockstats hash_or_height ( stats )
getchaintips
getchaintxstats ( nblocks "blockhash" )
getdifficulty
getmempoolancestors "txid" ( verbose )
getmempooldescendants "txid" ( verbose )
getmempoolentry "txid"
getmempoolinfo
getrawmempool ( verbose )
...

To make it easier to see exactly how far I am from synchronizing the full blockchain I added this script to my Pi. Just paste this code into your text editor (nano or vi for example), save the file (as blocks.sh in my case) and make it executable with chmod +x blocks.sh

#!/bin/bash
# Block height my node has synchronized to (via bitcoin-cli)
BC_CURRENT=$(bitcoin-cli getblockcount 2>&1)
# Current height of the main chain (via the Blockchain.info API)
BC_HEIGHT=$(wget -O - http://blockchain.info/q/getblockcount 2>/dev/null)
# Print my progress as a percentage
perl -E "say sprintf('Block %s of %s (%.2f%%)', $BC_CURRENT, $BC_HEIGHT, ($BC_CURRENT/$BC_HEIGHT)*100)"

Now to see my progress I can run the script using ./blocks.sh

Initial Blockchain download progress

Almost 86% done so I’ve got a few more days to go ☕☕☕

Uploading Files Over SSH Using KNIME

If you have SSH access to a server and want an easy, visual way of uploading files that can be automated and scheduled then using KNIME works great.

Simple file upload over ssh

Fortunately KNIME already has an SSH Connection node so the setup is very easy. The basic flow is as follows:

  • Make a connection to the ssh server
  • List the files to be uploaded
  • Make URIs from the file locations
  • Upload files to the server

SSH Connection 🔐

I recommend you always use ssh keys 🔑 to connect to your server. In my case this is already set up, but if you want to learn how to do this yourself see this guide. To create keys from Windows you need to install PuTTY and follow this guide.
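On Linux or macOS the key setup boils down to two commands. A quick sketch (your.server.com is a placeholder; the linked guides cover the details):

# Generate a key pair, optionally protected by a passphrase
ssh-keygen -t ed25519
# Install the public key on the server you will connect to from KNIME
ssh-copy-id user@your.server.com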

Once you have your private key, adding it to KNIME is straightforward. In the SSH Connection node:

  1. Add your server’s IP address or hostname
  2. Select keyfile as authentication method
  3. Add the user you wish to log in as and the password of the private key (if you created one). This is not the password of the server.
  4. Browse to the location of the saved private key
KNIME ssh connection configuration

Upload Files 📤

This is where you select the remote location on the server where the files will be uploaded; in my case I’m using the /tmp folder.

KNIME Upload configuration

The browse dialog lets you easily select folders on your remote server.

KNIME browsing remote files

That’s all there is to it. Now you can automatically upload files to your server over a secure connection 🔒

Looking Up Offset Rows in PostgreSQL

A common task when either reporting or doing data analysis is to extract data from a database table or view and then lookup corresponding values from either previous or next rows.

A good example of this was a recent KPI report I made for e-commerce where the KPI depended not only on the total daily volume of orders received but also on the total from each of the previous two days. Therefore I needed two extra columns with the previous day’s order volume (p1dayorders) and the order volume from two days previously (p2dayorders).

Using the Postgres LAG() function this is easy to achieve, as you can see below. The interesting part is highlighted in red.

Postgres LAG() function

The text formatted version that can be copied is available below.

SELECT osh."Division","CreationDateTime"::date,count(distinct osh."OrderNumber") as dailyorders,
	LAG(COUNT(DISTINCT osh."OrderNumber"),1)
		OVER (PARTITION BY osh."Division"
		ORDER BY "CreationDateTime"::date) as p1dayorders,
	LAG(COUNT(distinct osh."OrderNumber"),2)
		OVER (PARTITION BY osh."Division"
		ORDER BY "CreationDateTime"::date) as p2dayorders
FROM "OrderSyncHeader" osh
WHERE "osh"."CreationDateTime" >= '2019-07-01'::date
GROUP BY osh."Division",
	 osh."CreationDateTime"::date

The resulting output shows that we now have the daily order volumes in the dailyorders column, the total from the previous day in the p1dayorders column, and the total from two days back in p2dayorders.

Output from the LAG() function

Note the [null] values in the first two rows. This is caused by the data falling outside of the window, since we do not have the previous days’ data for the first records returned. If you wish to return another value instead of NULL this is also possible using the optional default argument. This code will return 0 for missing values.

LAG(COUNT(DISTINCT "osh"."OrderNumber"),1,0::bigint) 

The LEAD() function as you might guess does a similar task but instead of looking back looks forward. The syntax is otherwise identical.

Postgres LEAD() function
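For copying, the text version of the LEAD() query should look like this. It’s reconstructed from the LAG() query above with the n1dayorders/n2dayorders aliases, so treat it as a sketch of what the screenshot shows:

SELECT osh."Division", "CreationDateTime"::date, COUNT(DISTINCT osh."OrderNumber") AS dailyorders,
	LEAD(COUNT(DISTINCT osh."OrderNumber"),1)
		OVER (PARTITION BY osh."Division"
		ORDER BY "CreationDateTime"::date) AS n1dayorders,
	LEAD(COUNT(DISTINCT osh."OrderNumber"),2)
		OVER (PARTITION BY osh."Division"
		ORDER BY "CreationDateTime"::date) AS n2dayorders
FROM "OrderSyncHeader" osh
WHERE osh."CreationDateTime" >= '2019-07-01'::date
GROUP BY osh."Division",
	 osh."CreationDateTime"::date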

This gives the output below, with n1dayorders and n2dayorders being the order counts from the following one and two days.

Output from the LEAD() function

The Right Tool for the Job

Arguably the wrong tool for the job

During my career I’ve heard this countless times and to some extent it’s just taken for granted that you should always use the right tool for the job. Isn’t this obvious after all? Why would you knowingly choose the wrong tool?

But these conversations often miss the realities surrounding the choice. Decisions are not made in isolation. And what does ‘best’ even mean? Fastest, easiest to implement, doesn’t require consultants? There’s a myriad of influencing factors:

  • Currently available skills
  • Currently available tools
  • Time frames and deadlines
  • Availability and pricing of new tools
  • Learning curve for new tools
  • Cost vs benefit / ROI
  • Expected lifetime of the solution

In this day and age there’s an almost unlimited number of ways to solve a particular problem. You want to extract data from a database, process it and present the results somewhere? If you’re an analyst you might do this in Excel, a programmer might use python, a business user might use a robot and so on. All of these are possible.

Tableau Prep

A growing trend also seems to be that tools and platforms increasingly have overlapping functionality making the decision even less clear.

Tableau has Prep which is basically an ETL-lite tool, robots can pull data directly from databases, almost everything can send mails, parse JSON and XML and connect to APIs.

This often comes up when talking about Robotic Process Automation, or RPA. I’ve used many ETL tools like Alteryx, KNIME and RapidMiner and they are great at extracting and processing data, but RPA can also be used to move and process data between systems. I wouldn’t recommend using RPA to pull millions of rows of data but it could be done.

Similarly a programmer might point out that a user interface could be automated using Python and Selenium at a fraction of the cost of an enterprise RPA solution. This is technically true, but if you don’t have a dedicated team of Python experts in your organisation how does that help?

There is always a grey area when the problem could be solved in many different ways using an array of tools and platforms each with their own trade-offs.

In my experience from business the main limitations are financial ones and the available skill sets.

Let’s say your company has invested a significant sum in RPA. You have a direct connection to the database and need to extract and process a large volume of data. You know that an ETL platform would be better, but how are you going to persuade your manager to invest? If you do invest, do you then have the right skills to get the full value from your investment? How long will the ‘best’ solution take to get up and running?

Instead of asking whether this is the best tool for the job, ask these questions compared to using the ‘right’ tool:

  • Are there significant risks to success?
  • What functionality will I potentially miss?
  • How would the process differ?
  • Can it be easily maintained/extended?
  • Will it last the expected lifetime of the process?

Life is full of compromises and you often need to make do with what you have. When you only have access to a hammer everything needs to look like a nail 🔨🔨

WordPress File Size Limits on Nginx

So my WordPress site is up and running and inevitably I hit a few roadblocks on the way. While trying to upload a video I encountered the classic ‘HTTP error’ that is almost always due to file size limits on the server.

WordPress upload error

In my case the fix was simple but requires changing settings for both PHP and the Nginx web server. First fix PHP by logging into the server and running these commands.

cd /etc/php/7.2/fpm/
sudo nano php.ini

Find and edit these lines in the file (feel free to pick your own limits). Close (CTRL+X) and save (Y+ENTER) the file.

upload_max_filesize = 96M
post_max_size = 96M 

Then restart the PHP service.

sudo systemctl restart php7.2-fpm

Next we need to change the Nginx web server settings. Instead of doing this globally for the entire server I did it for my WordPress site only, by editing the server block for Nginx. Note that on my server I have the server block in the /etc/nginx/sites-available/ directory and use a symlink in the /etc/nginx/sites-enabled/ directory pointing to it. Replace your_server_name with your server name 🤔

cd /etc/nginx/sites-available/
sudo nano your_server_name

Simply add the line client_max_body_size 96M; to the server section of the file, then close and save.

server {
         root /var/www/creativedata;
         index index.php index.html index.htm index.nginx-debian.html;
         server_name creativedata.stream www.creativedata.stream;
         client_max_body_size 96M;
         ...
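Before restarting, it’s worth checking that the configuration parses, and if you use the same sites-available/sites-enabled layout the symlink only needs creating once:

# Link the site into sites-enabled (only needed once)
sudo ln -s /etc/nginx/sites-available/your_server_name /etc/nginx/sites-enabled/your_server_name
# Test the configuration before restarting
sudo nginx -t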

Restart Nginx to load the change.

sudo systemctl restart nginx

Now when you upload media in WordPress you will see the new file size limit. Done!

WordPress file size limit

Sixteen Years of Learning

One of the greatest things about the internet is that nothing is forgotten. Of course this has also turned into one of its greatest risks with the rise of social media.

I used to run my own website, starting in 2003 until around 2013 when I removed the site. I wrote it myself in PHP with a MySQL database. Everything was hand coded from scratch, including all the HTML, CSS and PHP. That’s what you can do before you have kids!

Fortunately the Wayback Machine has cached copies of almost the whole site so it’s easy to look back and see what I was playing around with back then. A virtual trip down memory lane.

Archived view of bobpeers.com

I was using Fedora Core 6 back then (I started on Fedora Core 4 if I remember correctly), which came either as a DVD iso or spread across 6 CD iso files. You can still download it from their archives although I wouldn’t recommend it.

Fedora Core 6 archive repository

I was heavily into Linux at the time and had many pages on very specific Linux issues: mounting external logical volumes, installing Linux, installing VNC and SSH. Really nerdy stuff 🤓

There was also lots of general low-level stuff, like connecting to IMAP and POP mailboxes using the command line. Not something you need to do every day. I also spent quite a bit of time compiling my own Linux kernels, with the main aim being to decrease the boot time on my laptop. I got it down to about 15 seconds in the end 🔥🔥

I don’t spend as much time with the details these days and often choose products that β€˜just work’.

I’ve got older and my time is more valuable now, so I feel the need to focus on learning what really gives value.

The key is that these years spent ‘playing around’ taught me an enormous amount and gave me a much deeper understanding of technology. This has been immensely valuable in my career, even if that was not the prime driver at the time.

Hello world!

This image sums up how things are going to be around here. It won’t be narrowly focused, maybe not even focused at all. More of an experiment in ‘Doing fun things with technology’™.

The main purpose of this site is to have a place to store my own content. Sounds really retro, blogs are so 00’s, right? But as LinkedIn, Medium, Facebook etc. made sharing content free and easy, they also made it theirs. You write for them and you live by their rules. If they decide to remove, hide or edit your content there’s nothing you can do. And if they one day cease to be, then it’s goodbye to all your hard work.

This site is self hosted using WordPress on my own virtual server. My site, my rules, my content.

Welcome and enjoy!