Tag: electricity

What is electricity trading?

Electricity trading is the process of power generators selling the electricity they generate to power suppliers, who can then sell this electricity on to consumers. The system operator – National Grid ESO in Great Britain – oversees the flow of electricity around the country, and ensures the amounts traded will ultimately meet demand and do not overwhelm the power system.

Who is involved in electricity trading?

There are three main parties in a power market: generators (sources that produce electricity, such as thermal power plants, wind turbines, solar panels and energy storage sites), consumers (hospitals, transport, homes and factories using electricity), and suppliers in the middle, from whom consumers purchase electricity.

Electricity is generated at power stations, then bought by suppliers, who then sell it on to meet the needs of the consumers.

Electricity trading refers to the transaction between power generators, who produce electricity, and power suppliers, who sell it on to consumers.

How are electricity contracts made?

Electricity trading occurs in both long- and short-term time frames, ranging from years in advance to deals covering the same day. Generation and supply must meet exact demand for every minute of the day, which means that traders must always be ready to buy or sell power to fill any sudden gaps that arise.

When trading electricity far in advance, factors such as exchange rates, the cost and availability of fuel, and changing regulations and policies all affect the price. Short-term prices are more volatile, with factors such as weather, news events and even what’s on television having the biggest impact.

Traders analyse live generation data and news reports, to predict ahead of time how much electricity will be needed during periods of high demand and then determine a price. Traders then make offers and bids to suppliers and strike a deal – these deals then dictate how and when a power station’s generators are run every day.
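One simplified way to picture how a deal price emerges is a merit-order stack: generator offers are sorted by price and accepted until demand is covered, with the last offer needed setting the price. This is an illustrative sketch rather than the actual mechanics of any GB market; the offer figures and the `clearing_price` helper are invented for the example.

```python
# Hypothetical merit-order sketch: stack generator offers in ascending
# price order until demand is met; the last offer needed sets the price.
# Offer figures are illustrative, not real market data.

def clearing_price(offers, demand_mw):
    """offers: list of (price_per_mwh, capacity_mw); returns marginal price."""
    supplied = 0.0
    for price, capacity in sorted(offers):
        supplied += capacity
        if supplied >= demand_mw:
            return price  # the most expensive offer needed sets the price
    raise ValueError("offers cannot cover demand")

offers = [(15.0, 400), (32.0, 600), (55.0, 300)]  # (£/MWh, MW)
print(clearing_price(offers, demand_mw=800))  # → 32.0
```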

Why is electricity trading important?

Running a power station is an expensive process and demand for electricity never stops.  The electricity market ensures the country’s power demands are met, while also aiming to keep electricity businesses sustainable, through balancing the price of buying raw materials with the price at which electricity is sold.

To ensure the grid remains balanced and meets demand, the systems operator also makes deals with generators for ancillary services, either far in advance, or last-minute. This ensures elements such as frequency, voltage and reserve power are kept stable across the country and that the grid remains safe and efficient.

Electricity trading ensures there is always a supply of power and that the market for electricity operates in a stable way.


Go deeper

The cost of staying in control

Industrial landscape with cables, pylons and train at sunset. Somerset, UK, January 2016.

The cost of keeping Britain’s power system stable has soared, and now adds 20% onto the cost of generating electricity.

The actions that National Grid takes to manage the power system have typically amounted to 5% of generation costs over the last decade, but this share has quadrupled over the last two years.  In the first half of 2020, the cost of these actions averaged £100 million per month.

Supplying electricity to our homes and workplaces needs more than just power stations generating electricity.

Supply and demand must be kept perfectly in balance, and flows of electricity around the country must be actively managed to keep all the interconnected components stable and prevent blackouts.  National Grid’s costs for taking these actions have been on the rise, as we reported over the previous two summers; but recently they have skyrocketed.

At the start of the decade, balancing added about £1/MWh to the cost of electricity, but last quarter it surpassed £5/MWh for the first time (see below).

Balancing prices have risen in step with the share of variable renewables.  The dashed line below shows that every extra percentage point of electricity supplied by wind and solar adds 10 pence per MWh to the balancing price.  Last quarter bucked this trend, though, with balancing prices rising 35% above the level the trend would predict.  The UK Energy Research Centre predicted that wind and solar would add up to £5/MWh to the cost of electricity due to their intermittency, and Britain has now reached this point, albeit a few years earlier than expected.
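The linear trend described above can be written out directly. This is an illustrative sketch: the £1/MWh base price echoes the start-of-decade figure quoted earlier, but the 30% renewable share is an assumption chosen for the example, not data from the report.

```python
# Illustrative check of the trend: balancing price rises roughly 10 pence
# per MWh for each percentage point of wind and solar share.
# The 30% share below is an assumed figure, not one from the report.

def expected_balancing_price(renewable_share_pct, slope_gbp=0.10, base_gbp=1.0):
    """Linear trend: base price plus slope times renewable share (in %)."""
    return base_gbp + slope_gbp * renewable_share_pct

trend = expected_balancing_price(30)      # e.g. 30% wind and solar
actual = trend * 1.35                     # last quarter sat 35% above trend
print(round(trend, 2), round(actual, 2))  # → 4.0 5.4
```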

This is partly because keeping the power system stable is requiring more interventions than ever before.  With low demand and high renewable generation, National Grid is having to order more wind farms to reduce their output, at a cost of around £20 million per month.  They even had to take out a £50+ million contract to reduce the output from the Sizewell B nuclear reactor at times of system stress.

Two charts illustrating the costs of balancing Great Britain's power system

[Left] The quarterly-average cost of balancing the power system, expressed as a percentage of the cost of generation. [Right] Balancing price shown against share of variable renewables, with dots showing the average over each quarter

A second reason for the price rise is that National Grid’s costs of balancing are passed on to generators and consumers, who pay per MWh.  As demand has fallen by a sixth since the beginning of the coronavirus pandemic, the increased costs are being shared out among a smaller base.  Ofgem has stepped in to cap the balancing service charges at a maximum of £10/MWh until late October, and its COVID support scheme will defer up to £100 million of charges until the following year.
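The effect of spreading a fixed cost over a smaller base is simple arithmetic. The monthly demand figure below is an invented assumption for illustration; only the roughly £100 million monthly cost and the one-sixth fall in demand come from the text.

```python
# The per-MWh effect of spreading fixed balancing costs over lower demand.
# Monthly demand is an assumed figure; the cost and the one-sixth fall
# are taken from the text.

monthly_cost_gbp = 100e6     # ~£100m/month of balancing actions
normal_demand_mwh = 25e6     # assumed monthly consumption (illustrative)
reduced_demand_mwh = normal_demand_mwh * (1 - 1/6)  # demand down by a sixth

print(round(monthly_cost_gbp / normal_demand_mwh, 2))   # → 4.0 (£/MWh)
print(round(monthly_cost_gbp / reduced_demand_mwh, 2))  # → 4.8 (£/MWh)
```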

For a quarter of a century, the electricity demand in GB ranged from 19 to 58 GW*.  Historically, demand minus the intermittent output of wind and solar farms never fell below 14 GW.  However, in each month from April to June this year, this ‘net demand’ fell below 7 GW.

Just as a McLaren sports car is happier going at 70 than 20 mph, the national grid is now being forced to operate well outside its comfort zone.

This highlights the importance of the work that National Grid must do towards their ambition to be ready for a zero-carbon system by 2025.  The fact we are hitting these limits now, rather than in a few years’ time is a direct result of COVID.  Running the system right at its limits is having a short-term financial impact, and is teaching us lessons for the long-term about how to run a leaner and highly-renewable power system.

Chart: Minimum net demand (demand minus wind and solar output) in each quarter since 1990




Front cover of the Drax Electric Insights Q2 2020 report

The ideas and tricks inside Great Britain’s plugs

Rewiring a UK 13 amp domestic electric plug

It may be bulkier than its foreign cousins and its flat back might make it the perfect household booby trap, but the UK plug is a modern-day design marvel.

The UK’s ‘G Type’ (or BS 1363) plug is a product of the post-war age. But it has endured for the better part of a century, ensuring homes, businesses and sockets around the UK have access to safe, usable electricity. Even as the devices it powers have changed, become smarter and more connected, the three-prong G Type remains unchanged.

But to understand how it came into being, it’s worth first understanding what makes it such a unique and clever bit of design – including its role in achieving the ambitions of one of Great Britain’s pioneering female engineers, and its money-saving abilities.

What makes Great Britain’s plugs special

The modern plug used across Great Britain (as well as Ireland, Cyprus, Hong Kong and Malaysia) is a smarter and more advanced item than many of its contemporaries. This is thanks to a number of key, but often overlooked, features.


A collection of international power point illustrations

The first is its earth prong. Connecting the plug to the earth means if a wire comes loose in, say, a toaster and touches a metal part, the device will short circuit as the electricity runs through to make contact with the earth, rather than the entire item becoming electrified and dangerous.

The longer earth prong also plays the role of ‘gatekeeper’ for the entire plug. When a plug enters a socket the longer earth prong enters before any others, pushing back plastic shutters that sit over the live and neutral entrances. This means when there is no plug in a socket the live and neutral ports, which actually carry electric current to devices, are covered over making it very difficult for a child to push anything dangerous into the socket.

Infographic: What makes the UK plug special?

Another clever feature inside the Great British plug comes in the form of a fuse connected to the live wire. If there’s an unexpected electrical surge the fuse will blow and cut off the connected device, preventing fires and electrocutions.

All packaged together, the G Type plug is far from the most compact version – yet it is hugely effective. However, these ideas didn’t come together at the flick of a switch.

Pre-war plugs

Going back to the end of the 19th century, the idea of owning devices you could move around your house and connect to the electricity circuit from different rooms was novel.

Electricity’s main role in homes was lighting, which was fixed into walls and ceilings with its cables hidden. It wasn’t until the rise of new electrical appliances in the 20th century that the need arose for an easy way to plug electrical items into circuits.

A series of two-pronged plugs first emerged in 1883, but there was no standardisation of design which would allow any appliance to be plugged into any socket. That began to change in 1904, when US inventor Harvey Hubbell developed a plug that allowed non-bulb electrical devices to be connected into an existing light socket, eliminating the need for the installation of new sockets.

By 1911, a design for a three-pronged plug with an earth connection had emerged, with manufacturer AP Lundberg bringing the first of this kind of plug to Britain.

By 1934, regulations appeared requiring plugs and sockets to include an earthing prong, which eventually gave rise to a plug with three cylindrical prongs: the BS 546.

BS 546 plugs

The BS 546 differed from the modern G Type in that it didn’t contain a fuse and was available in five different sizes depending on the needs of the appliance, from small 2 ampere plugs for low-power appliances to a larger 30 ampere version for industrial machinery. The different sizes and spacing of the prongs prevented low-power devices from accidentally being plugged into high-power outlets.

Any chance of globally standardising plugs was doomed from the beginning, as different companies in different countries all began developing their own plugs for their products as electricity rapidly gained uptake.

Some attempts were made by the International Electrotechnical Commission (IEC) to standardise plugs globally but the Second World War put a stop to any progress.   

Electricity for the people

Great Britain emerged from the Second World War with its national grid standing strong. The challenge now was to make electricity not just the power source of factories and wealthy people’s homes, but something available for everyone in the wave of new post-war construction.

Other countries, which had not seen as much damage to their housing stock as the UK, did not have the same opportunity to rethink domestic electricity to such an extent. In Britain, the Institution of Electrical Engineers (IEE) assembled a 20-person committee to consider the electrical requirements of the country’s new homes.

Caroline Haslett

Breaking new ground: Caroline Haslett

The sole woman on the committee, Caroline Haslett, had been breaking new ground for female engineers since before the First World War. In her career she worked with turbine inventor Charles Parsons and his wife (an engineer in her own right), and in 1932 became the first woman selected to join the IEE. Her passion for electricity went so far that her will requested she be cremated by electricity.

She had long believed in the potential of domestic electricity to improve women’s lives by freeing them from the drudgery of pre-electric domestic chores, from handwashing clothes to cooking on coal-fuelled stoves. This included ensuring electricity was safe for those using it around the home: in the 1940s, primarily women, who also did the vast majority of childcare.

Haslett’s drive to make electricity safe in the home was pivotal in shaping many of the IEE’s safety requirements for post-war domestic electricity, including what have become the country’s standard plugs and sockets.

There was another factor aside from safety at play. The material cost of the war meant copper, the main material used in electrical wiring, was in short supply, so the IEE came up with a new way of wiring homes that would in turn shape our plugs.

Shifting fuses to save copper

Before the war, British sockets were all separately wired back to a central fuse box. This made sense: if something went wrong, only the fuse connected to that socket would blow, rather than cutting power to the whole house.

However, to cut the amount of copper used, the IEE instead proposed a clever workaround in which a home’s electrical sockets are looped together in one ring circuit, with the fuses moved to the plugs themselves. If something went wrong in an appliance, the fault would stop at the plug, where the fuse could easily be accessed and replaced. Lighting fixtures remained on a separate circuit from sockets, as they require less current to operate.

Copper Wiring

Copper was in short supply during the Second World War.

This hidden fuse is a big differentiator from other plug types and adds to the G Type’s safety credentials. However, the IEE had to ensure people did not mistakenly insert older, fuseless three-prong BS 546 plugs into the new sockets.

The answer was as simple as switching the socket holes from round to rectangular. The older cylindrical prongs simply wouldn’t fit the rectangular slots designed for the plugs found in homes today.

The G Type plug might seem cumbersome compared to the European or US models, but in the 70-plus years since its introduction, its three prongs and built-in fuse have proved an enduring design that can power new devices and smart technology while remaining one of the safest plugs in the world.

4 of the longest running electrical objects

How long do your electrical devices last? We’re not talking about battery life, but the overall lifetime of the items we use every day that are powered by electricity.

It’s accepted that today’s electrical devices have short life spans, in part a symptom of rapidly evolving technology fuelling the need for constant consumer updates and in part a result of planned obsolescence (devices being manufactured to fail within a set number of years to encourage repeat purchases). Electrical devices aren’t purchased with the belief they will last a lifetime.

But it hasn’t always been this way. Before rapid technological development and the rise of fast consumerism, devices were built to last.

Over the relatively short history of electrical appliances, there are tools and equipment that have operated for decades. Some of these remain in operation today with hardly any alterations, but for a few tweaks here and there to upgrade or preserve.

Built to last, here are a few of the longest running electrical inventions.

The Oxford Electric Bell located in the Clarendon Laboratory, University of Oxford.

1840 – The Oxford Electric Bell

The Oxford Electric Bell is not your typical bell – not just in how it looks, but in the fact it has been in constant operation since the mid 19th Century. It consists of two primitive batteries called ‘dry piles’ with bells fitted at each end and a metal ball that vibrates between them to very quietly, continuously ring.

Its original purpose is unknown, but what is known is that the bell is the result of an experiment put on by the London instrument-manufacturing firm Watkins and Hill in 1840. Acquired by Robert Walker, a physics professor at the University of Oxford, in the mid 1800s, it’s displayed at Oxford’s Clarendon Laboratory, which explains why it’s also known as the Clarendon Pile.

The exact make-up of the dry piles is unknown, as no one wants to tamper with them to investigate their composition for fear of ending the bell’s 179-year-long streak. As a result, it remains a mystery why the Oxford Electric Bell has stayed in operation for so long.

Souter Lighthouse, Tyneside, England.

1871 – Souter Lighthouse in South Shields, UK

The lamp in the Souter lighthouse, situated between the rivers Tyne and Wear, was the most advanced of its day when it was first constructed. Designed to use an alternating electric current, it was the first purpose-built, electrically powered lighthouse in the world. Although no longer in operation today, it ran unchanged for nearly 50 years.

The light was generated using carbon arc lamps, and it originally produced a beam of red light that would come on once every five seconds.

Souter’s original lamp operated unchanged from 1871 to 1914, when it was replaced by more conventional oil lamps. It was altered again to run on mains electric power in 1952 and was finally deactivated in 1988.

1896 – The Isle of Man’s Manx Electric Railway

Tourism hit the Isle of Man in the 1880s and with it came the construction of hotels and boarding houses. Two businessmen saw this as an opportunity to purchase a large estate on the island and develop it into housing and a pleasure development. The Manx Parliament approved the sale in 1892 on one condition: that a road and a tramway be built to give people access.

Snaefell mountain railway station, Isle of Man.

It was decided that the tram would be electric, and work began in the spring of 1893, with the tram system up and running by September of that year. Although the track and its cars have been extended and updated over time, the first three cars remain the longest running electric tramcars in the world.

Photograph by Dick Jones (centennialbulb.org)

1902 – The Centennial Bulb

The unassuming Centennial Bulb has been working in the Livermore, California Fire Department for 117 years. The bulb was first installed in 1902 in the department’s hose cart house, but was later moved to Livermore’s Fire Station 6, where it has been illuminated for more than a million hours.

Throughout its life the Centennial Bulb has seen just two interruptions: for a week in 1937 when the Firehouse was refurbished, and in May 2013 when it was off for nine and a half hours due to a failed power supply. Made by the Shelby Electric Company, the hand-blown bulb previously shone at 60 watts but has since been dimmed to 4 watts.

While this means it isn’t able to actually illuminate much, it is a reminder that despite the disposable nature of many modern electrical devices, it’s possible to build electrical items that last.

14 moments that electrified history

Electricity is such a universal and accepted part of our lives it’s become something we take for granted. Rarely do we stop to consider the path it took to become ubiquitous, and yet through the course of its history there have been several eureka moments and breakthrough inventions that have shaped our modern lives. Here are some of the defining moments in the development of electricity and power.

2750 BC – Electricity first recorded in the form of electric fish

Ancient Egyptians referred to electric catfish as the ‘thunderers of the Nile’ and were fascinated by these creatures. This fascination led to millennia of wonder and intrigue, including crude documented experiments such as touching the fish with an iron rod to cause electric shocks.

500 BC – The discovery of static electricity

Around 500 BC, Thales of Miletus discovered that static electricity could be made by rubbing lightweight materials such as fur or feathers on amber. The effect then went largely unexplored for almost 2,000 years, until around 1600 AD, when William Gilbert began studying static electricity in earnest.

1600 AD – The origins of the word ‘electricity’

The Latin word ‘electricus’, which translates to ‘of amber’, was used by the English physician William Gilbert to describe the force exerted when items are rubbed together. A few years later, the English scientist Thomas Browne translated this into ‘electricity’ in his written investigations in the field.

1751 – Benjamin Franklin’s ‘Experiments and Observations on Electricity’

This book of Benjamin Franklin’s discoveries about the behaviour of electricity was published in 1751. The publication and translation of the American founding father, scientist and inventor’s letters would provide the basis for all further electricity experimentation. It also introduced a host of new terms to the field, including positive, negative, charge, battery and electric shock.

1765 – James Watt transforms the Industrial Revolution

Watt studies Newcomen’s engine

James Watt transformed the Industrial Revolution with his modified Newcomen engine, now known as the Watt steam engine. Machines no longer had to rely on sometimes-temperamental wind, water or manpower; instead, steam from boiling water could drive the pistons back and forth. Although Watt’s engine didn’t generate electricity, it created a foundation that would eventually lead to the steam turbine – still the basis of much of the globe’s electricity generation today.

James Watt’s steam engine

Alessandro Volta

1800 – Volta’s first true battery

Documented records of battery-like objects date back to 250 BC, but the first true battery was invented by Italian scientist Alessandro Volta in 1800. Volta realised that a current was created when zinc and silver were immersed in an electrolyte – the principle on which chemical batteries are still based today.

1800s – The first electrical cars

Breakthroughs in electric motors and batteries in the early 1800s led to experimentation with electrically powered vehicles. The British inventor Robert Anderson is often credited with developing the first crude electric carriage at the beginning of the 19th century, but it would not be until 1890 that American chemist William Morrison would invent the first practical electric car (though it more closely resembled a motorised wagon), boasting a top speed of 14 miles per hour.

Michael Faraday

1831 – Michael Faraday’s electric dynamo

Faraday’s invention of the electric dynamo power generator set the precedent for electricity generation for centuries to come. His invention converted motive (or mechanical) power – such as that from steam, gas, water and wind – into electrical power at a low voltage. Although rudimentary, it was a breakthrough in generating consistent, continuous electricity, and opened the door for the likes of Thomas Edison and Joseph Swan, whose subsequent discoveries would make large-scale electricity generation feasible.

1879 – Lighting becomes practical and inexpensive

Thomas Edison patented the first practical and accessible incandescent light bulb, using a carbonised bamboo filament which could burn for more than 1,200 hours. Edison made the first public demonstration of his incandescent lightbulb on 31st December 1879 where he stated that, “electricity would be so cheap that only the rich would burn candles.” Although he was not the only inventor to experiment with incandescent light, his was the most enduring and practical. He would soon go on to develop not only the bulb, but an entire electrical lighting system.

Holborn Viaduct power station via Wikimedia

1882 – The world’s first public power station opens

Holborn Viaduct power station, also known as the Edison Electric Light Station, burnt coal to drive a steam engine and generate electricity. The power was used for Holborn’s newly electrified street lighting, an idea which would quickly spread around London.

1880s – Tesla and Edison’s current war

Nikola Tesla and Thomas Edison waged what came to be known as the current war in 1880s America. Tesla was determined to prove that alternating current (AC) – as is generated at power stations – was safe for domestic use, going against the Edison Group’s opinion that a direct current (DC) – as delivered from a battery – was safer and more reliable.

Inside an Edison power station in New York

The conflict led to years of risky demonstrations and experiments, including one in which Tesla passed current through his own body in front of an audience to prove it would not harm him. The war over the future of electric power generation continued until eventually AC won.

Nikola Tesla

1901 – Great Britain’s first industrial power station opens

Before Charles Mertz and William McLellan of Merz & McLellan built the Neptune Bank Power Station in Tyneside in 1901, individual factories were powered by private generators. By contrast, the Neptune Bank Power Station could supply reliable, cheap power to multiple factories that were connected through high-voltage transmission lines. This was the beginning of Britain’s national grid system.

1990s – The first mass market electrical vehicle (EV)

Concepts for electric cars had been around for a century, but the General Motors EV1 was the first model to be mass-produced by a major car brand, made possible by advances in rechargeable batteries. However, the EV1 could not be purchased, only leased directly on a monthly contract. Because of this, its expensive build and relatively small customer following, the model lasted only six years before General Motors crushed the majority of the cars.

2018 – Renewable generation accounts for a third of global power capacity

The International Renewable Energy Agency’s (IRENA) 2018 annual statistics revealed that renewable energy accounted for a third of global power capacity in 2018. Globally, total renewable electricity generation capacity reached 2,351 GW at the end of 2018, with hydropower accounting for almost half of that total, while wind and solar energy accounted for most of the remainder.

Breaking circuits to keep electricity safe

Electric relay with sparks jumping between the contacts due to breaking a heavy inductive load.

Electricity networks around the world differ in many ways, from the frequency they run at, to the fuels they’re powered by, to the infrastructure they run on. But they all share at least one core component: circuits.

A circuit allows an electrical current to flow from one point to another, moving it around the grid to seamlessly power street lights, domestic devices and heavy industry. Without them electricity would have nowhere to flow and no means of reaching the things it needs to power.

But electricity can be volatile, and when something goes wrong it’s often on circuits that problems first manifest. That’s where circuit breakers come in. These devices can jump into action and break a circuit, cutting off electricity flow to the faulty circuit and preventing catastrophe in homes and at grid scale. “All this must be done in milliseconds,” says Drax Electrical Engineer Jamie Beardsall.

But to fully understand exactly how circuit breakers save the day, it’s important to understand how and why circuits work.

Circuits within circuits 

Circuits work thanks to the natural properties of electricity, which always wants to flow from a high voltage to a lower one. In the case of a battery or mains plug this means there are always two sides: a negative side with a voltage of zero and a positive side with a higher voltage.

In a simple circuit, electricity flows in a current along a conductive path from the positive side, where there is a voltage, to the negative side, where there is a lower or no voltage. The amount of current flowing depends on both the voltage applied and the size of the load within the circuit.

We’re able to make use of this flow of electricity by adding electrical devices – for example a lightbulb – to the circuit. When the electricity moves through the circuit it also passes through the device, in turn powering it. 
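The relationship between voltage, load and current described above is Ohm’s law, I = V / R. A minimal sketch with illustrative figures: 230 V is standard UK mains, while the 960-ohm load is an invented example.

```python
# Ohm's law ties the paragraph's quantities together: the current I drawn
# by a device equals the applied voltage V divided by its resistance R.
# 230 V is standard UK mains; the 960-ohm load is an invented example.

def current_amps(voltage_v, resistance_ohms):
    return voltage_v / resistance_ohms

# A 230 V circuit driving a 960-ohm load (roughly a 55 W lamp):
print(round(current_amps(230, 960), 2))  # → 0.24
```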

A row of switched-on household electrical circuit breakers on a wall panel

The national grid, your regional power distributor, our homes, businesses and more are all composed of multiple circuits that enable the flow of electricity. This means that if one circuit fails (for example if a tree branch falls on a transmission cable), only that circuit is affected, rather than the entire nation’s electricity connection. At a smaller scale, if one light bulb in a house blows it will only affect that circuit, not the entire building.

And while the causes of circuit failures vary, from fallen tree branches, to serious wiring faults, to too many high-power appliances plugged into a single circuit causing currents to shoot up and overload it, the solution for preventing damage is almost always the same.

Fuses and circuit breakers

In homes, circuits are often protected from dangerously high currents by fuses, which in Great Britain are normally found in standard three-pin plugs and fuse boxes. In a three pin plug each fuse contains a small wire – or element.

An electrical fuse

When electricity passes through the circuit (and fuse), it heats up the wire. But if the current running through the circuit gets too high the wire overheats and disintegrates, breaking the circuit and preventing the wires and devices attached to it from being damaged. When a fuse like this breaks in a plug or a fuse box it must be replaced. A circuit breaker, however, can carry out this task again and again.
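The fuse behaviour described above can be sketched as a tiny model: current within the rating passes, current above it destroys the element, and a blown fuse stays open until replaced. The `Fuse` class and ratings below are illustrative, not a real device model.

```python
# Minimal sketch of fuse behaviour: current within the rating passes;
# current above it melts the element and breaks the circuit for good.
# The 13 A rating matches a common UK plug fuse; the class is illustrative.

class Fuse:
    def __init__(self, rating_amps):
        self.rating_amps = rating_amps
        self.intact = True

    def carry(self, current_amps):
        """Return the current delivered downstream (0 once the fuse has blown)."""
        if current_amps > self.rating_amps:
            self.intact = False  # element overheats and disintegrates
        return current_amps if self.intact else 0.0

fuse = Fuse(rating_amps=13)
print(fuse.carry(5))   # → 5    normal load passes
print(fuse.carry(30))  # → 0.0  fault current blows the fuse
print(fuse.carry(5))   # → 0.0  a blown fuse must be replaced
```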

Instead of a piece of wire, circuit breakers use an electromagnetic switch. When the circuit breaker is on, the current flows through two points of contact. When the current is at a normal level the adjacent electromagnet is not strong enough to separate the contact points. However, if the current increases to a dangerous level the electromagnet is triggered to kick into action and pulls one contact point away, breaking the circuit and opening the circuit breaker.

Another approach uses a strip made of two different metals. As the current increases and temperatures rise, one metal expands faster than the other, causing the strip to bend and break the circuit. Once the connection is broken, the strip cools, allowing the circuit breaker to be reset.

This means the problem on the circuit can be identified and solved, for example by unplugging a high-power appliance, before flipping the switch back on and reconnecting the circuit.

Protecting generators at grid scale 

Power circuit breakers for a high-voltage network

Circuit breakers are important in residential circuits, but at grid level they become even more crucial in preventing wide-scale damage to the transmission system and electricity generators.

If part of a transmission circuit is damaged, for example by high winds blowing over a power line, the current flow within that circuit can be disrupted and can flow to earth rather than to its intended load or destination. This is what is known as a short circuit.

Much like in the home, a short circuit can result in dangerous increases in current with the potential to damage equipment in the circuit or nearby. Equipment used in transmission circuits can cost millions of pounds to replace, so it is important this current flow is stopped as quickly as possible.

“Circuit breakers are the light switches of the transmission system,” says Beardsall.

“They must operate within milliseconds of an abnormal condition being detected. However, in terms of similarities with the home, this is where it ends.”

Current levels in the home are small – usually below 13 amps (A or ampere) for an individual circuit, with the total current coming into a home rarely exceeding 80A.

In a transmission system, current levels are much higher. Beardsall explains: “A single transmission circuit can have current flows in excess of 2,000A and voltages up to 400,000 Volts. Because the current flowing through the transmission system is much greater than that around a home, breaking the circuit and stopping the current flow becomes much harder.”
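
To get a feel for that difference in scale, the power carried by a circuit is simply voltage multiplied by current. A quick sketch in Python (the 13 A and 2,000 A / 400,000 V figures come from the article; the 230 V domestic supply voltage is an assumed typical UK value):

```python
# Rough scale comparison: electrical power = voltage x current.

def power_watts(volts: float, amps: float) -> float:
    """Power carried by a circuit, in watts."""
    return volts * amps

home = power_watts(230, 13)                 # one domestic circuit
transmission = power_watts(400_000, 2_000)  # one transmission circuit

print(f"Home circuit:         {home / 1e3:.1f} kW")
print(f"Transmission circuit: {transmission / 1e6:.0f} MW")
```

Under these assumptions a single transmission circuit carries hundreds of megawatts – hundreds of thousands of times more power than a domestic circuit.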

A small air gap is enough to break a circuit at a domestic level, but at grid-scale voltage is so high it can arc over air gaps, creating a visible plasma bridge. To suppress this the contact points of the circuit breakers used in transmission systems are often contained in housings filled with insulating gases or within a vacuum, which are not conductive and help to break the circuit.

A 400kV circuit breaker on the Drax Power Station site

In addition, there will often be several contact points within a single circuit breaker to help break the high current and voltage levels. Older circuit breakers used oil or high-pressure air for breaking current, although these are now largely obsolete.

In a transmission system, circuit breakers will usually be triggered by relays – devices which measure the current flowing through the circuit and trigger a command to open the circuit breaker if the current exceeds a pre-determined value. “The whole process,” says Beardsall, “from the abnormal current being detected to the circuit breaker being opened can occur in under 100 milliseconds.”
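
At its core, the relay's decision is a simple comparison: command the breaker open if the measured current exceeds a pre-determined value. A minimal sketch (the threshold and sample currents below are illustrative, not real protection settings):

```python
TRIP_THRESHOLD_A = 2_500  # illustrative pick-up current, not a real protection setting

def relay_should_trip(current_a: float, threshold_a: float = TRIP_THRESHOLD_A) -> bool:
    """Overcurrent relay logic: open the breaker when current exceeds the threshold."""
    return current_a > threshold_a

# Simulated current samples (amps): normal load, then a fault
for amps in [1_800, 1_950, 2_100, 9_600]:
    if relay_should_trip(amps):
        print(f"{amps} A -> TRIP: open the circuit breaker")
        break
    print(f"{amps} A -> normal")
```

In a real protection scheme this comparison, plus the mechanical opening of the breaker, completes in under 100 milliseconds.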

Circuit breakers are not only used for emergencies, though. They can also be activated to shut off parts of the grid or equipment for maintenance, or to direct power flows to different areas.

A single circuit breaker used within the home would typically be small enough to fit in your hand. A single circuit breaker used within the transmission system may well be bigger than your home.

Circuit breakers are a key piece of equipment in use at Drax Power Station, just as they are within your home. Largely unnoticed, hundreds of circuit breakers are installed all around the site of the largest power station in the UK.

A 3300 Volt circuit breaker at Drax Power Station

“They provide protection for everything from individual circuits powering pumps, fans and fuel conveyors, right through to protecting the main 660 megawatt (MW) generators, allowing either individual items of plant to be disconnected or enabling full generating units to be disconnected from the National Grid,” explains Beardsall.

The circuit breakers used at Drax in North Yorkshire vary significantly. Operating at voltages from 415 Volts right up to 400,000 Volts, they vary in size from something like a washing machine to something taller than a double decker bus.

Although the size, capacity and scale of the circuit breakers varies dramatically, all perform the same function – allowing different parts of electrical circuits to be switched on and off and ensuring electrical system faults are isolated as quickly as possible to keep damage and danger to people to a minimum.

While the voltages and currents are much larger at a power station than in any home, the approach to quickly breaking a circuit remains the same. Circuits are integral to any power system, but they would mean nothing without a failsafe way of breaking them.

What is net zero?

Skyscraper vertical forest in Milan

For age-old rivals Glasgow and Edinburgh, the race to the top has taken a sharp turn downwards. Instead, they’re in a race to the bottom to earn the title of the first ‘net zero’ carbon city in the UK.

While they might be battling to be the first in the UK to reach net zero, they are far from the only cities with net zero in their sights. In the wake of the growing climate emergency, cities, companies and countries around the world have all announced their own ambitions for hitting ‘net zero’.

It has become a global focus based on necessity – for the world to hit the Paris Agreement targets and limit global temperature rise to under two degrees Celsius, it’s predicted the world must become net zero by 2070.

Yet despite its ubiquity, net zero is a term that’s not always fully understood. So, what does net zero actually mean?

Glasgow, Scotland. Host of COP26.

What does net zero mean?

‘Going net zero’ most often refers specifically to reaching net zero carbon emissions. But this doesn’t just mean cutting all emissions down to zero.

Instead, net zero describes a state where the greenhouse gas (GHG) emitted [*] and removed by a company, geographic area or facility is in balance.

In practice, this means that as well as making efforts to reduce its emissions, an entity must capture, absorb or offset an equal amount of carbon from the atmosphere to the amount it releases. The result is that the carbon it emits is the same as the amount it removes, so it does not increase carbon levels in the atmosphere. Its carbon contributions are effectively zero – or more specifically, net zero.

The Grantham Research Institute on Climate Change and the Environment likens the net zero target to running a bath – an ideal level of water can be achieved by either turning down the taps (the mechanism adding emissions) or draining some of the water from the bathtub (the mechanism removing emissions from the atmosphere). If these two things are equally matched, the water level in the bath doesn’t change.
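
In code, that balance reduces to a single subtraction – an entity is at net zero when the carbon it adds and the carbon it removes cancel out:

```python
def net_emissions(emitted_mt: float, removed_mt: float) -> float:
    """Net carbon contribution in megatonnes: positive adds carbon to the
    atmosphere, zero is net zero, negative is carbon negative."""
    return emitted_mt - removed_mt

assert net_emissions(50, 50) == 0   # taps and drain balanced: net zero
assert net_emissions(50, 30) > 0    # still adding carbon overall
assert net_emissions(50, 60) < 0    # removing more than is emitted
```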

To reach net zero and drive a sustained effort to combat climate change, a similar overall balance between emissions produced and emissions removed from the atmosphere must be achieved.

But while the analogy of a bath might make it sound simple, actually reaching net zero at the scale necessary will take significant work across industries, countries and governments.

How to achieve net zero

The UK’s Committee on Climate Change (CCC) recommends that to reach net zero all industries must be widely decarbonised, heavy goods vehicles must switch to low-carbon fuel sources, and a fifth of agricultural land must change to alternative uses that bolster emission reductions, such as biomass production.

However, given the nature of many of these industries (and others considered ‘hard-to-treat’, such as aviation and manufacturing), completely eliminating emissions is often difficult or even impossible. Instead, residual emissions must be counterbalanced by natural or engineered solutions.

Natural solutions can include afforestation (planting new forests) and reforestation (replanting trees in areas that were previously forestland), which use trees’ natural ability to absorb carbon from the atmosphere to offset emissions.

On the other hand, engineering solutions such as carbon capture usage and storage (CCUS) can capture and permanently store carbon from industry before it’s released into the atmosphere. It is estimated this technology can capture in excess of 90% of the carbon released by fossil fuels during power generation or industrial processes such as cement production.
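
As a worked example of that capture rate: for a source emitting, say, 10 million tonnes of CO2 a year (an illustrative figure, not one from the article), 90% capture would leave roughly a tenth reaching the atmosphere:

```python
def residual_emissions(emitted_mt: float, capture_rate: float) -> float:
    """CO2 still reaching the atmosphere after capture, in megatonnes."""
    return emitted_mt * (1 - capture_rate)

# Illustrative source: 10 Mt/year of CO2 with 90% capture
print(f"{residual_emissions(10, 0.90):.1f} Mt/year still emitted")  # roughly 1 Mt/year
```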

Negative emissions essential to achieving net zero

Source: Zero Carbon Humber.

Bioenergy with carbon capture and storage (BECCS) could take this a step further and lead to a net removal of carbon emissions from the atmosphere, often referred to as negative emissions. BECCS combines the use of biomass as a fuel source with CCUS. When that biomass comes from trees grown in responsibly managed working forests that absorb carbon as they grow, it is a low carbon fuel. When the carbon emissions from its use are captured at the point of combustion, the overall process removes more carbon than it releases, creating ‘negative emissions’.
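
The arithmetic behind negative emissions can be made concrete with illustrative numbers (the 100-tonne figures and 90% capture rate below are assumptions for the sake of the sum, not Drax data):

```python
def beccs_net_emissions(absorbed_t: float, released_t: float, capture_rate: float) -> float:
    """Net change in atmospheric CO2, in tonnes. Negative means carbon
    has been removed from the atmosphere overall."""
    captured = released_t * capture_rate
    return released_t - captured - absorbed_t

# Biomass absorbed 100 t of CO2 while growing; burning it releases the same
# 100 t; CCUS captures 90% of that at the point of use.
net = beccs_net_emissions(absorbed_t=100, released_t=100, capture_rate=0.90)
print(f"{net:.0f} t")  # negative: a net removal from the atmosphere
```

Without capture the process would be roughly carbon neutral; the capture step is what pushes the balance negative.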

According to the Global CCS Institute, BECCS is quickly emerging as the best solution to decarbonise emission-heavy industries. A joint report by The Royal Academy of Engineering and Royal Society estimates that BECCS could help the UK to capture 50 million tonnes of carbon per year by 2050 – eliminating almost half of the emissions projected to remain in the economy.

The UK’s move to net zero

In June 2019, the UK became the first major global economy to pass a law to reduce all greenhouse gas emissions to net zero by 2050. It is one of a small group of countries, including France and Sweden, that have enacted this ambition into law, forcing the government to take action towards meeting net zero.

Electrical radiator

Although this is an ambitious target, the UK is making steady progress towards it. In 2018 the UK’s emissions were 44% below 1990 levels, while some of the most emissions-intensive industries are fast decarbonising – June 2019 saw the carbon content of electricity hit an all-time low, falling below 100 g/kWh for the first time. This is especially important as the shift to net zero will create much greater demand for electricity, with fossil fuel use in transport and home heating replaced by power from the grid.

Hitting net zero will take more than this consistent reduction in emissions, however. An increase in capture and removal technologies will also be required. In all, the CCC predicts that an estimated 75 to 175 million tonnes of CO2 and equivalent emissions will need to be removed by CCUS solutions annually in 2050 to fully meet the UK’s net zero target.

This will need substantial financial backing. The CCC forecasts that, at present, a net zero target can be reached at an annual resource cost of up to 1-2% of GDP between now and 2050. However, there is still much debate about the role the global carbon markets need to play to facilitate a more cost-effective and efficient way for countries to work together through market mechanisms.

Industries across the UK are starting to take affirmative action to work towards the net zero target. In the energy sector, projects such as Drax Power Station’s carbon capture pilots are turning BECCS increasingly into a reality ready to be deployed at scale.

Along with these individual projects, reaching net zero also requires greater cooperation across the industrial sectors. The Zero Carbon Humber partnership between energy companies, industrial emitters and local organisations, for example, aims to deliver the UK’s first zero carbon industrial cluster in the Humber region by the mid-2020s.

Nonetheless, efforts from all sectors must be made to ensure that the UK stays on course to meet all its immediate and long-term emissions targets. And regardless of whether Edinburgh or Glasgow realises its net zero goal first, the competition demonstrates how important the idea of net zero has become – and the strength of society’s drive for real change across the UK.

Drax has announced an ambition to become carbon negative by 2030 – removing more carbon from the atmosphere than produced in our operations, creating a negative carbon footprint. Track our progress at Towards Carbon Negative.

[*] In this article we’ve simplified our explanation of net zero. Carbon dioxide (CO2) is the most abundant greenhouse gas (GHG). It is also a long-lived GHG that creates warming that persists in the long term. Although the land and ocean absorb it, a significant proportion stays in the atmosphere for centuries or even millennia, causing climate change. It is, therefore, the most important GHG to abate. Other long-lived GHGs include nitrous oxide (N2O, lifetime of circa 120 years) and some F-gases (e.g. SF6, with a lifetime of circa 3,200 years). GHGs are often aggregated as carbon dioxide equivalent (abbreviated as CO2e or CO2eq) and it is this that net zero targets measure. In this article, ‘carbon’ is used for simplicity and as a proxy for ‘carbon dioxide’, ‘CO2’, ‘GHGs’ or ‘CO2e’.

What is a fuel cell and how will it help power the future?

How do you get a drink in space? That was one of the challenges for NASA in the 1960s and 70s when its Gemini and Apollo programmes were first preparing to take humans into space.

The answer, it turned out, surprisingly lay in the electricity source of the capsules’ control modules. Primitive by today’s standards, these panels were powered by what are known as fuel cells, which combine hydrogen and oxygen to generate electricity. The by-products of this reaction are heat and water – pure enough for astronauts to drink.

Fuel cells offered NASA a much better option than the clunky batteries and inefficient solar arrays of the 1960s, and today they remain at the forefront of energy technology, presenting opportunities to clean up roads, power buildings and even help to reduce carbon dioxide (CO2) emissions from power stations.

Power through reaction

At its most basic, a fuel cell is a device that uses a fuel source to generate electricity through a series of chemical reactions.

All fuel cells consist of three segments: two catalytic electrodes – a negatively charged anode on one side and a positively charged cathode on the other – and an electrolyte separating them. In a simple fuel cell, hydrogen, the most abundant element in the universe, is pumped to one electrode and oxygen to the other. Two different reactions then occur at the interfaces between the segments, generating electricity and water.

What allows this reaction to generate electricity is the electrolyte, which selectively transports charged particles from one electrode to the other. These charged molecules link the two reactions at the cathode and anode together and allow the overall reaction to occur. When the chemicals fed into the cell react at the electrodes, it creates an electrical current that can be harnessed as a power source.
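
The size of that current follows directly from how fast fuel is consumed: in a hydrogen fuel cell, each H2 molecule surrenders two electrons, so Faraday's constant links fuel flow to current. A minimal sketch (the hydrogen flow rate is an arbitrary illustrative value):

```python
FARADAY = 96_485  # coulombs per mole of electrons

def cell_current_amps(h2_mol_per_s: float) -> float:
    """Current from a hydrogen fuel cell: two moles of electrons per mole of H2."""
    return 2 * FARADAY * h2_mol_per_s

# Feeding one millimole of hydrogen per second:
print(f"{cell_current_amps(0.001):.0f} A")  # roughly 193 A
```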

Many different kinds of chemicals can be used in a fuel cell, such as natural gas or propane instead of hydrogen. A fuel cell is usually named after the electrolyte it uses, as different electrolytes selectively transport different molecules across the cell. The catalysts on either side are specialised to ensure the correct reactions occur at a fast enough rate.

For the Apollo missions, for example, NASA used alkaline fuel cells with potassium hydroxide electrolytes, but other types such as phosphoric acids, molten carbonates, or even solid ceramic electrolytes also exist.

The by-products that come out of a fuel cell depend on what goes into it. However, their ability to generate electricity while creating few emissions means they could have a key role to play in decarbonisation.

Fuel cells as a battery alternative

Fuel cells, like batteries, can store potential energy (in the form of chemicals), and then quickly produce an electrical current when needed. Their key difference, however, is that while batteries will eventually run out of power and need to be recharged, fuel cells will continue to function and produce electricity so long as there is fuel being fed in.

One of the most promising uses for fuel cells as an alternative to batteries is in electric vehicles.

Rachel Grima, a Research and Innovation Engineer at Drax, explains:

“Because it’s so light, hydrogen has a lot of potential when it comes to larger vehicles, like trucks and boats. Whereas battery-powered trucks are more difficult to design because they’re so heavy.”

These vehicles can pull in oxygen from the surrounding air to react with the stored hydrogen, producing only heat and water vapour as waste products. This – coupled with an expanding network of hydrogen fuelling stations around the UK, Europe and US – makes them a transport fuel with a potentially big future.

 

Fuel cells, in conjunction with electrolysers, can also operate as a large-scale storage option. Electrolysers operate in reverse to fuel cells, using excess electricity from the grid to produce hydrogen from water, which is stored until it’s needed. When there is demand for electricity, the hydrogen is released to the fuel cell and electricity generation begins.
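
How much of the original electricity survives the round trip depends on the efficiency of both conversions. A back-of-envelope sketch (the 70% electrolyser and 50% fuel cell efficiencies are illustrative assumptions, not figures from this article):

```python
def round_trip_kwh(kwh_in: float, electrolyser_eff: float, fuel_cell_eff: float) -> float:
    """Electricity recovered after storing energy as hydrogen and converting it back."""
    return kwh_in * electrolyser_eff * fuel_cell_eff

# 100 kWh of surplus wind power, with assumed conversion efficiencies
recovered = round_trip_kwh(100, electrolyser_eff=0.70, fuel_cell_eff=0.50)
print(f"{recovered:.0f} kWh recovered")  # 35 kWh under these assumptions
```

The losses are significant, which is why this kind of storage makes most sense for electricity that would otherwise be wasted, as in the Orkney example below.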

A project on the islands of Orkney is using the excess electricity generated by local, community-owned wind turbines to power an electrolyser and store hydrogen that can then be transported to fuel cells around the archipelago.

Fuel cells’ ability to take chemicals and generate electricity is also leading to experiments at Drax for one of the most important areas in energy today: carbon capture.

Turning CO2 into power

Drax is already piloting bioenergy carbon capture and storage technologies, but fuel cells offer the unique ability to capture and use carbon while also adding another form of electricity generation to Drax Power Station.

“We’re looking at using a molten carbonate fuel cell that operates on natural gas, oxygen and CO2,” says Grima. “It’s basic chemistry that we can exploit to do carbon capture.”

The molten carbonate – a 600 degree Celsius liquid made up of either lithium-potassium or lithium-sodium carbonate – sits in a ceramic matrix and functions as the electrolyte in the fuel cell. Natural gas and steam enter on one side and pass through a reformer that converts them into hydrogen and CO2.

On the other side, flue gas – the emissions (including biogenic CO2) which normally enter the atmosphere from Drax’s biomass units – is captured and fed into the cell alongside air from the atmosphere. The CO2 and oxygen (O2) pass over the electrode, where they form carbonate (CO32-), which is transported across the electrolyte to react with the hydrogen (H2), creating an electrical charge.
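
Those are the standard molten carbonate half-reactions: CO2 + 1/2 O2 + 2e- forming carbonate at one electrode, and carbonate reacting with H2 at the other. As a quick sanity check, a few lines of Python can confirm that atoms and charge balance on both sides:

```python
from collections import Counter

# Atom counts per species, with surplus electrons tracked under the key "e"
SPECIES = {
    "CO2": Counter({"C": 1, "O": 2}),
    "O2":  Counter({"O": 2}),
    "CO3": Counter({"C": 1, "O": 3, "e": 2}),  # carbonate ion carries a 2- charge
    "H2":  Counter({"H": 2}),
    "H2O": Counter({"H": 2, "O": 1}),
    "e":   Counter({"e": 1}),
}

def side_total(side):
    """Sum atoms and charge over a list of (coefficient, species) pairs."""
    total = Counter()
    for coeff, name in side:
        for atom, n in SPECIES[name].items():
            total[atom] += coeff * n
    return total

# Cathode: CO2 + 1/2 O2 + 2e- -> CO3(2-)
assert side_total([(1, "CO2"), (0.5, "O2"), (2, "e")]) == side_total([(1, "CO3")])
# Anode: H2 + CO3(2-) -> H2O + CO2 + 2e-
assert side_total([(1, "H2"), (1, "CO3")]) == side_total([(1, "H2O"), (1, "CO2"), (2, "e")])
print("both half-reactions balance")
```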

“It’s like combining an open cycle gas turbine (OCGT) with carbon capture,” says Grima. “It has the electrical efficiency of an OCGT. But the difference is it captures CO2 from our biomass units as well as its own CO2.”

Along with capturing and using CO2, the fuel cell also reduces nitrogen oxide (NOx) emissions from the flue gas, some of which are destroyed when the O2 and CO2 react at the electrode.

From the side of the cell where flue gas enters a CO2-depleted gas is released. On the other side of the cell the by-products are water and CO2.

During a government-supported front end engineering and design (FEED) study starting this spring, this CO2 will also be captured, then fed through a pipeline running from Drax Power Station into the greenhouse of a nearby salad grower. Here it will act to accelerate the growth of tomatoes.

The partnership between Drax, FuelCell Energy, P3P Partners and the Department of Business, Energy and Industrial Strategy could provide an additional opportunity for the UK’s biggest renewable power generator to deploy bioenergy carbon capture usage and storage (BECCUS) at scale in the mid 2020s.

From powering spaceships in the 70s to offering greenhouse gas-free transport today, fuel cells continue to advance. As low-carbon electricity sources become more important, they’re set to play a bigger role yet.

Learn more about carbon capture, usage and storage in our series: