The Gas-Efficient Way of Building and Launching an ERC721 NFT Project For 2022

37 min read · Dec 17, 2021

A few housekeeping things, massive credit to all listed below:

MasonNFT, Squeebo & GoldenXNFT, Jonathan Snow, Tom Hirst, Alpha Girl Club, The Littles (WAGMI), transmissions11, Toy Boogers, Ninja Squad NFT, Pixel Gan, Open Zeppelin, Real Fake Turnips, Nuclear Nerds, 0xInuarashi, xtremetom, Cool Cats, Block Native

Table of Contents

  • Getting Caught Up
  • The Problem and Building A Real Solution
    👉 — The Resources You May Want and Need
    👉 — Enabling the Optimizer When Compiling The Contracts
    👉 — Avoiding LTE/GTE When Possible
    👉 — Stop Catching Events Red-Handed
    👉 — Removing OpenSea Listing Fees
    👉 — Preventing Bots, Massive Losses, and Extraneous Costs
    👉 — Building With Extendable Project Proxy Approvals In Mind
    👉 — Handling Loops That Constantly Grow
    👉 — Handling Allowlisting With Merkle Trees
  • Building A Project With Each Optimized Piece
    👉 — The Thoughts of the Nuclear Nerd Founders
    👉 — What's Coming Next
  • Moving Forward

Getting Caught Up

It has been a fascinating series of events to watch as many people write articles and launch contracts following the release of "How the standardized usage of Open Zeppelin smart contracts cannibalized NFTs with insane gas prices."

This piece has a few opinions on how an NFT blockchain developer should think and work when developing a smart contract on Ethereum. I realize this is one of the most dev-maxi things someone could ever say. These simple practices save a community millions of dollars; not over months, but instantly. Yet, this article is not just for developers.

We will dive deep, and this article will be a little more verbose than the usual article that includes a few screenshots on one topic. We are looking into the core of how we work as NFT project developers on Ethereum.

Before we do anything, let's do a quick recap.

To sum it up quickly, OpenZeppelin ERC721Enumerable is wildly inefficient. There are quite a few significant inefficiencies, but specific bottlenecks significantly amplify the issue.

Others in the NFT community saw the possibilities as word spread that a more optimized method exists, but many walked away with very different takes.

  • Some wrote articles claiming to be a pioneer after reverting to older methods.
  • Some projects used the savings purely as a marketing angle while the substance of the project stayed the same.
  • Some projects have saved their community millions of dollars and gone above and beyond.
  • Some projects said they fixed the issue when they just avoided it and introduced a slew of other problems and inadequacies.
  • Some projects failed to dig deeper than the initial knowledge, resulting in massively incorrect conclusions.

The range in quality of understandings and implementations has been incredible. Quite a few of the articles released were copy-pasted, so I won't go into detail about any of them beyond one piece, written from a very different perspective on blockchain development than my own, by Jonathan Snow (great name).

Beyond that article, there are also a few responses that deserve immediate attention so that we can move forward in this article with the information needed to understand why many of the proposed solutions do not solve the problem. So let's look at those.

The solution proposed by many has been to remove the ERC721Enumerable library and go back to an older method of token enumeration using Counters, as suggested in the article covering another side of this issue, Cut Minting Gas Costs By Up To 70% With One Smart Contract Tweak.

For some, falling back to a less effective solution is somewhat understandable under the argument that a quick swap to another OpenZeppelin library will suffice, letting the developer keep moving when speed or ease is the priority.

It is a valid argument under certain pretenses, and I failed to see that initially. There does exist a spot for the solution proposed by Jonathan, but it is not a multi-million dollar NFT collection.

  • Perhaps when prototyping.
  • Perhaps in-house usage.

Not when subjecting a segment of a few thousand buyers to increased gas costs that accumulate to several tens of thousands of dollars.

When we consider that a typical NFT project selling out right now collects $1M+ in a matter of minutes, this reality becomes borderline satire, as developers are praised for copy-pasting while failing to plan and build for the future.

  • Shouldn't the highest level of quality and effort be expected from a developer at all times?
  • Don't we all want to buy into projects from competent teams that operate at levels beyond the one needed to optimize their contract?
  • Isn't the job of a blockchain developer to know how to save their buyers the highest amount of gas while offering exceptionally high levels of security at all times?

In the end, these questions are meant to get you thinking more deeply about the issue, even if you aren't a developer. They do not speak to the cyclical tendencies of developer emotions through economic market states; that is just nature.

So, is the idea that we take a shortcut and implement a semi-solution acceptable to you? That's a personal decision.

Multiple projects in the market have been immediately spotlighted as low-quality and low-effort, even while using a lower gas contract.

Even after using the gas savings as a primary marketing method, the copy-pasted optimization did nothing to get them across the line of making any more than a few hundred sales.

So, if you're a creator and you hear the potential of 80% gas savings and think it will be an easy sell-out, I have news for you.

Would the situation be different had the creator gone further, properly optimized/audited the contract, and built a complete project? Absolutely.

And that brings us to today, where instead of continuing this plague of ridiculous contract optimizations, I am just going to give you a more appropriate contract for the day throughout this article and all the knowledge needed to understand why it's better this way.

The Problem and Building A Real Solution

You can take the base that we've tested and optimized beyond belief so that your project can genuinely start with a more solid foundation.

With this, every creator can dig deeper into the possibilities of NFT smart contract functionality because the magnitude of savings is so high. (Attribute if you want. It doesn't matter. People will know where it came from.) There is zero way a developer can argue that the current state of Ethereum is stifling their creative ability on the blockchain to the degree that results in complacency and an utter lack of overall progress within the market. The capability exists; the developer has to care enough to optimize the platform to support their dreams.

Before diving in, we must first talk about the glaring issues with this implementation I am proposing and the nuances of managing them. We are starting with the base implementation of OpenZeppelin's ERC721Enumerable.

While our solution does come with nuances, the vital thing to note here is that at no point are these nuances passed onto the buyer through poor experience or increased gas usage. That's what matters.

The problems we are immediately looking to solve with an adequately written ERC721Enumerable are:

  1. Having to store redundant/unneeded information on-chain.
  2. Managing increasingly large loops.

These issues seem pretty massive on the surface, yet they are relatively simple and don't take much longer to fix. It's just a matter of having the fortitude to implement the changes and swiftly follow with a proper suite of helper functions and tests for full coverage.

Immediately addressing the first significant issue: We dove into the core issue of storing redundant information in the first article released on this topic here: How the standardized usage of Open Zeppelin smart contracts cannibalized NFTs with insane gas prices, and the points ring loud still.

Yet, this article is less exploratory of the concept and more hands-on with the grand capabilities, past errors, and future paths.

The Resources You May Want and Need

By removing redundant information and updating the format to be more appropriate, we save a significant amount of gas.

To do this, we must understand the core functionality of ERC721Enumerable. With the primary issue at the front of our minds, let's get to work by starting with the base of OpenZeppelin contracts. We need a few things, so let's pack our backpack for the day at school and hop on the bus!

We will not dive into Merkle Trees and the technicalities of how they work in this article, only top-level. If you need that info, this article is excellent! Although we won't discuss how they work today, we will discuss why we should use them.

With all the basics out of the way, it is finally time to go deeper, and we will start at step one, work our way up to the more complex items and then finalize everything by taking a look into the future and making sure we are prepared for the years to come.

Enabling the Optimizer When Compiling The Contracts

Okay, this is a crazy thing to start with, but let me put it as simply as possible.

There exists a configuration option in Solidity that will optimize the compiled code. For months, the NFT industry has seemingly stopped using this setting due to what I believe was a documentation error that existed at one time. With this knowledge out of circulation for so long, there is a large subset of new developers who have never even come across the setting.

This one is funny, so I will not spend too much time on the details. Let's laugh together. There is a straightforward way to know if the developer copy-pasted an outdated library/template when minting a project.

The optimizer should always be enabled.

It takes four lines of configuration to enable this, and it saves literally thousands of dollars in gas all across the community. Four lines is not an exaggeration.
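For reference, this is roughly what those four lines look like in a Hardhat configuration file. This is a sketch: the compiler version and `runs` value here are assumptions you should tune to match your own contract.

```javascript
// hardhat.config.js
module.exports = {
  solidity: {
    version: "0.8.10", // assumed; match your contract's pragma
    settings: {
      optimizer: {
        enabled: true, // the setting most projects forget
        runs: 200,     // tune for how often functions will be called
      },
    },
  },
};
```

A higher `runs` value optimizes for cheaper function calls at the cost of slightly larger deployment bytecode, which is usually the right trade for a mint contract that will be called thousands of times.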

What happens if we build a test system running the same contract and tests with the optimizer enabled versus disabled? Common sense says costs should decrease with the optimizer on. Right?

With the optimizer enabled.
With the optimizer disabled.

Such a small thing immediately saves $0.40 per mint alone. These savings propagate out to every function as well. Utilizing the optimizer isn't an optimization that only impacts the buyers.

Failing to do small things like this adds up to create a project with incredibly inefficient markets because they are so limited by the day-to-day health of the Ethereum ecosystem and the cost of gas.

Avoiding LTE/GTE When Possible

Avoiding ≤ and ≥ checks is another elementary optimization that every Solidity developer can use. Though it doesn't always make sense to implement, that's just how coding on the blockchain sometimes works. There is a fine line between having an optimized contract and an over-optimized contract. At points of efficiency, approachable and legible logic quickly get lost in a sea of confusion.

The first project I saw diving into this concept was Cool Cats, with incredible attention to even remove the need for greater-than-or-equal checks. Instead of using an expensive check, increase the size of the underlying limit variable by one and take advantage of the decreased gas usage that comes from using a lighter comparison.

Before Cool Cats launched, this article on saving gas was highly recommended, and all these months later, it still holds water! Today the methods utilized to save gas are far more convoluted and involved than just changing a few symbols on your basic arithmetic, but it is a beautiful starting point.

Perhaps a little code will help you understand more:
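A minimal sketch of the pattern follows; the names `MAX_SUPPLY_PLUS_ONE` and `mint` are hypothetical, not taken from any specific project.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Store the cap one higher than the real limit so the mint check
// can use a strict `<` instead of the slightly costlier `<=`.
// Real cap: 10,000 tokens.
uint256 constant MAX_SUPPLY_PLUS_ONE = 10001;

contract LighterCheckSketch {
    uint256 public totalSupply;

    function mint(uint256 amount) external payable {
        // Strict less-than comparison instead of
        // `totalSupply + amount <= 10000`.
        require(totalSupply + amount < MAX_SUPPLY_PLUS_ONE, "Sold out");
        // ... minting logic ...
        totalSupply += amount;
    }
}
```

The saving per call is small, but it applies to every mint in the collection, which is how these micro-optimizations add up.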

With a simple increase of 1, we can avoid an extra arithmetic check and an increased mint cost.

It is that simple. With a bit of adjustment between the starting value and the way we check the maximum limit, we can save real money for every person that runs that function.

In some scenarios, it will entirely break your logic to follow this practice, which is okay. For example, it is sometimes difficult to justify increasing TOTAL_SUPPLY of an NFT collection by one due to the worry that individuals looking at the contract code may not do enough research to realize why it appears to be set one higher than the actual limit.

Additionally, doing it like this will have a few odd nuances throughout your code that will require you to check all of the limits you have in place. The last thing you want to do is mint, reveal, or burn one too many tokens. Again, a blockchain developer is not chasing ease of the job. They should at all times be tracking the delivery of the highest quality product. There are genuinely no exceptions. Doing fancy things is why it is so vital we can test everything locally before even moving to a public testnet.

Stop Catching Events Red-Handed

In Solidity, there are events that developers can "broadcast." With this, other individuals working with the on-chain data can quickly filter through events and track the state of things without logging the entire blockchain indefinitely.

Broadcasting events comes at a cost, though. They clock in at an expense of roughly 5k gas, which means they aren't exactly cheap. An optimized ERC721 mint should cost approximately 50k gas, so spending a tenth of the total cost to broadcast an event that likely isn't being used is quite wasteful.

When first talking to Squeebo about the optimizations, it came up that packing a smart contract with events leads to a highly gassy situation.

Implementing many events is a tendency of some of the most popular Solidity libraries and a widespread habit of newer Solidity developers enamored with the capabilities.

With minimal maneuvering, you can build a system that supports a more gas-efficient contract.

For example, let's imagine we have a centralized backend handling the Stage 1 reveals. As soon as a buyer mints a token, we want to reveal the metadata and image, without either being available a second before the mint.

A typical assumption would be that we catch the event and update our backend system. At least, that would be the standard implementation for a non-blockchain backend. Our situation isn't as sunny, though.

We have to keep in mind gas and the usage we are slamming our RPC provider with. You don't want to go to bed and wake up to a system that's crashed because you're being rate-limited by the hands of your poorly written code.

How important is this? Let's take a look at how fast your costs can rack up. Let's check out Infura.

Let's say we want a seamless reveal, so we'll try an instant refresh along with grabbing the token owner, without events triggering a WebSocket function.

To do this, we will refresh once every second. Immediately, that is 86,400 requests a day if the clock is running for 24 hours (the case for NFT projects that haven't sold out entirely and have an open mint). Additionally, since we have also been using our provider on the front-end to connect to Ethereum, we need to double our usage, and we've now reached 172,800 requests/day with very little actual interaction. That is not a sustainable structure.

Shortcut options exist, but they are unreliable and should not be found in any large-scale project like an NFT drop. For example, if you were willing to accept very low quality, you could use the OpenSea API and live with the risk of downtime, data staleness, and so on.

Instead of creating an incredibly convoluted system, let's use some of the fantastic tools at our disposal. For this problem, we are going to use BlockNative. We will set up an event to watch for a function call that will allow us to post into a local network.

Creating a subscription for function calls is extremely easy with Block Native.

We've got a subscription running to be notified on our system any time mint() or publicMint() are called. We are just about ready to kick back and relax.

For those wondering, yes this is also the system that many Flashbotters use. You know what they say, if you can’t beat them, join them. Block Native is a well-kept secret that every blockchain developer benefits from having in their arsenal.

With the subscription prepared to go, we will save it and head to our account page. Time to set up our ngrok (or your preference) server to catch all of the incoming events from Block Native.

Spin up your server, and you are ready to catch incoming events! If you want an in-depth guide on using the product, the Block Native team has done a stellar job documenting the process.

By using a more appropriate system for the optimizations, we are improving all system parts to massive degrees. Not only are the costs lower, but the efficiency and overall capability of the system have grown far beyond a simple clock script.

Now, the ability to filter down through events and perform near-real-time off-chain processing is a powerful ability to have unlocked, especially at such a low cost.

Simply put, most of the time, you will end up with a better system by not relying on Solidity events to simplify your off-chain procedures.

Removing OpenSea Listing Fees

The knowledge of how to remove the approval fee that comes with listing the first token from an account on OpenSea is not high-level. Implementing this results in every single NFT project community saving $60k+ within minutes of launching, as the amount saved grows right alongside the number of unique sellers in the market.

Saving such extreme amounts of money is sometimes a touchy subject; many creators have objected to removing the OpenSea approval fee.

For example, implementing this functionality was recommended in a conversation with The Littles, a pixel-art project focused on community and family-like interaction. It made perfect sense because they were driven to build a brand focused on offering their community the best experience, project, and family/fren vibe.

Did The Littles launch with this implemented?

No. Of course, there could be many reasons, but choosing not to implement such significant savings in gas is not justifiable under any circumstances, especially when the team knows a solution exists. It's not just a choice not to implement the functionality; it's a choice to charge their community $20+ per unique seller.

So, it must be very confusing to implement and test if projects aren't implementing this, right? Not at all.

Let's look at that:

To start, let's first understand how OpenSea works. When you're listing on OpenSea, you are not paying a fee to list. Not the first time, not the hundredth time. OpenSea offers a gasless service, so why do you have to pay an approval fee? The developers of some of your favorite NFT projects just don't care.

By offering the ability to have a gasless listing, OpenSea transacts with the token through a proxy account created when you create your OpenSea account.

It's essentially just another identifier of who you are (your OpenSea account), yet when the transaction is called to list the token, your collection's smart contract has no way of knowing who this Proxy Account is, unless we tell it through an approval transaction or an improved initial configuration of the contract.

With no actual state change when listing, OpenSea just needs the ability (approval) to transfer the token when it sells.

We want to save as much money as possible and avoid forcing that approval transaction. To do that, the smart contract just needs to check with OpenSea on approval to verify that the Proxy Account is yours and thus already verified!

To pull this off, we first need to implement this basic code for our contract to use:
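The widely used declarations for this look roughly like the following. This is a sketch of the Wyvern-style registry interface that OpenSea's registry exposes; only the `proxies` mapping is needed by our contract.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Minimal declarations of OpenSea's proxy registry, included only so
// our contract can read the `proxies` mapping on the live registry.
contract OwnableDelegateProxy {}

contract ProxyRegistry {
    // Maps a user's wallet address to their OpenSea proxy contract.
    mapping(address => OwnableDelegateProxy) public proxies;
}
```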

With this, we can connect to OpenSea.

With that implemented, we can now talk to OpenSea, but we don't have any code to do that yet.

Each time a token moves owners or has its approval checked, we will first check with OpenSea to see if the request is coming from a Proxy account, and when it is, we will approve the request. If it's not a Proxy account, the transaction will continue with default functionality.

Let's get this added to the bottom of our contract:
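A common shape for that override is sketched below. It assumes the contract stores the registry address (which we set up in a moment) in a variable named `proxyRegistryAddress`; that name, like the rest, is illustrative rather than canonical.

```solidity
function isApprovedForAll(address owner, address operator)
    public
    view
    override
    returns (bool)
{
    // Auto-approve the caller's OpenSea proxy so the first listing
    // never requires a paid approval transaction.
    ProxyRegistry proxyRegistry = ProxyRegistry(proxyRegistryAddress);
    if (address(proxyRegistry.proxies(owner)) == operator) {
        return true;
    }
    // Otherwise, fall back to standard ERC721 approval bookkeeping.
    return super.isApprovedForAll(owner, operator);
}
```

Every other operator address still goes through the default approval path, so nothing changes for non-OpenSea marketplaces.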

With a single function added to your contract, you can save your community thousands.

Now we have our base functioning.

The last thing we need to do is make sure we have the OpenSea Proxy Registry addresses for the networks we will be using.

First, let's add a way to store that.

Super simple to track, and it makes sure everything runs smoothly all of the time.

For now, let's create an immutable address set through the contract constructor when the contract is deployed. Easy enough.
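A minimal sketch of that storage and constructor follows; the token name and symbol are placeholders.

```solidity
// The registry address differs per network, so set it once at deploy time.
address public immutable proxyRegistryAddress;

constructor(address _proxyRegistryAddress)
    ERC721("MyToken", "MTK") // hypothetical name and symbol
{
    proxyRegistryAddress = _proxyRegistryAddress;
}
```

Using `immutable` keeps reads cheap (the value is baked into the bytecode) while still letting the deploy script choose the right registry per network.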

The OpenSea Proxy Registry addresses are:

  • Rinkeby: 0xf57b2c51ded3a29e6891aba85459d600256cf317
  • Mainnet: 0xa5409ec958c83c3f309868babaca7c86dcb077c1

That leaves one last thing, because we still need to test our contract.

Let's now make sure we get a ProxyRegistryMock.sol added to our contract files so that when we write our tests, we can make sure that everything is in tip-top shape. This Mock won't ever be deployed. When we need to test the contract and all the functions before deploying, we will test all of this functionality locally before deploying to a testnet.
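A mock can be as small as the sketch below, which simply lets a test assign any address as an owner's "proxy" so the `isApprovedForAll` path can be exercised locally. The function names are hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Test-only stand-in for OpenSea's registry; never deployed to a live network.
contract ProxyRegistryMock {
    mapping(address => address) public proxies;

    // Lets a test pretend `operator` is the OpenSea proxy for `owner`.
    function setProxy(address owner, address operator) external {
        proxies[owner] = operator;
    }
}
```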

With this super basic implementation that takes just a few minutes, every collection can avoid the approval fee that has plagued new collections since the beginning, while saving literally tens of thousands of dollars.

It can be hard to understand the magnitude of this gas savings without breaking it down a little more simply, so let's do that.

If we look at Fang Gang's holders over time, we can see a peak of 4,114 holders (at the time of writing). This isn't even including the number of holders that have sold through the course of history.

Suppose every approval fee is a reasonable $15; that is already a cool $61,710. If we are a little more realistic, that quickly smashes into the scale of $100,000 lost in OpenSea marketplace approval fees alone. That's unacceptable.

Most shockingly, this information is neither new nor difficult to implement. This comes down to developers launching without fully understanding the ecosystem they are venturing into, combined with many developers who do not care.

Most interestingly, the refusal from The Littles is not the only refusal.

What is most interesting, though, is that most refusals to implement this have come from projects with extreme amounts of hype, followed by an immediate sell-out, and then an unexpected failure to continue forward with the level of quality delivered before minting began. In other words, many are just after your money and don't have the pure intentions they claim.

How many of those projects that refused to implement such a simple method of saving money have released essentially nothing since launch combined with a 180-degree shift in progress from the brand built before launch? A lot of them.

Alongside this, many can plead ignorance of the implementation. However, some individuals have also refused to save this gas under the idea that saving gas somehow promotes flipping within their market. That is not correct, but this is not an article on NFT market efficiency.

The decision of whether to remove this fee is easy.

  1. Does the developer care about their community? If yes, go to #2
  2. Does the developer know you can do this? If no, go to #3
  3. Share article, method, and resources with developer

The sharing of knowledge is the only way the entire NFT community improves. As a holder, I am sure you would love to save more than $20 per collection that you want to sell a token for, wouldn't you?

Preventing Bots, Massive Losses, and Extraneous Costs

Bot prevention has not been discussed previously in this topic of significant smart contract optimizations driven by the community. However, there have been a few collections that attempted to block bots. Unfortunately, the solutions employed were quite awful and loss-inducing for some of the most loyal members of their communities.

The Littles attempted to keep out bots and instead kept our loyal fans.

Unfortunately, The Littles are yet again a perfect example of how not to launch. I don't want to appear that I am being harsh on this project for no reason; that is not the case. I hold a big bag of Littles, I like the team and the community, but we all improve together, and it's crucial we understand the damage that comes from such flippant consideration of real-world impact.

So how significant was the impact? Enormous. $1.4M burned in a blink of an eye.

Why? Because of lousy market opinions, objectively bad choices, and, more importantly, terrible development practices.

Before we can understand the solution, let's understand the problem. Every overly-hyped NFT mint has a frenzy of bots attacking the smart contract if the minting experience is profitable. Remember, this is only if the minting + gas will be cheaper than the floor on secondary. If not, the odds are high that your NFT project isn't worth botting for a bot developer who commands an incredibly high hourly return.

But when the storm comes together and there is any room for exploits, there is sure to be a developer in the industry skilled enough to catch you slipping. The Littles chose not to verify their contract; not super bright.

By not verifying the contract, the average person can't see the code. This means it's harder for a typical buyer to inform themselves, harder for them to trust the team, and, most of all, harder to believe the team can deliver a high-quality system, given the inadequacy displayed by hiding behind the fragile wall of an unverified contract.

Yet, any decent developer will be able to see the code, which is precisely who not verifying the contract is/was intended to impact. For example, let's see what The Littles mint function looked like before being verified. When looking at the decoded bytecode of the contract, we can find:

This code clearly shows us that this is a time-based sale, given the variables being checked and the revert message included with them.

And that is all we need.

Now we know it's a time-based sale. Follow that up with:
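The guard described next is equivalent to the classic one-liner sketched below; this is reconstructed from the description, not The Littles' exact source.

```solidity
// Reject calls that originate from another contract: for a plain
// wallet (EOA), `tx.origin` always equals `msg.sender`, while a call
// routed through a minting contract makes them differ.
require(msg.sender == tx.origin, "Contracts cannot mint");
```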

They've ensured that the transaction's origin is the same as the sender so that no one can mint from a mass-minting contract, but serious botters stopped relying on such contracts months ago.

And we can see that the only actual measures in place are contract prevention. Meaning, this contract is ripe for mint botting.

So, we'll bust out our Flashbots minter, make sure it's all taken care of and ready for the contract, and we're all set!

We'll want to use our standard configuration to check all price variables each step of the way until we find the one with the value. (There is a better way of doing this, but it's functional.)

Run our script, and just like that, we have access, as long as our bot is set to run transactions with criteria that perform well during the drop. Now, The Littles avoided this primarily because there weren't that many competent Flashbots developers chasing The Littles.

But extreme precautions were taken, with a last-minute change in price. This resulted in one of the largest losses in an NFT community to date: 338 ETH lost in a matter of hours in failed transactions.

For anyone counting: $1,396,696.20 was lost in failed transactions at the time of writing. Woof. If the losses weren't felt by the botters, who did feel the massive losses here?

Unfortunately, it was the most passionate followers of Littles. Seconds before the mint began, the team changed the price from .125 to .12499.

Seconds into The Littles mint, there were already thousands of pending transactions.

Then the failures started coming in, coming in constant and steady. Changing the price forced all previously submitted transactions to fail.

Previously submitted transactions, you ask? Essentially, when there is a highly hyped drop, people submit transactions with low gas; when the drop comes, they increase the gas, and they're first in line. This is a human-driven strategy, not one a bot uses. Within seconds of the mint opening, there was a record-breaking number of pending transactions, with hundreds queued before the mint even opened.

One can argue that the people attempting to "front-run" deserved to have their transactions fail, but that is not the point here. The point is that The Littles did more damage to their community than the botters did, through a low-effort attempt at bot protection.

Realistically, preventing bots is complex, and many projects put high costs into minting functions to assist with the protection. Yet, there exist a few very cheap and very effective methods.

Today we will talk about uint keys because they are relatively simple to understand and an excellent example that sometimes all you need is a different perspective.

In general, the end goal of a blockchain developer should be to write the most efficient and secure code. That means limiting bloat as much as possible. Instead of blocking bots and making sure contracts can't mint with some fancy check, we will just use better logic.

So, let's say we have a key, and we set it to 09202112. We will pass in this key as an argument when we open the sale. Once the variable is set and the sale is enabled, minting can proceed when the user calls the function with the same key.
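A minimal sketch of the idea follows. All names are hypothetical, and `onlyOwner` is assumed to come from OpenZeppelin's Ownable.

```solidity
uint256 private saleKey; // set only when the sale opens

// The owner opens the sale and sets the key in the same transaction.
function startSale(uint256 _saleKey) external onlyOwner {
    saleKey = _saleKey;
}

function mint(uint256 key, uint256 amount) external payable {
    // A zero key means the sale has not opened yet; a wrong key means
    // the caller scripted the call before the key became public.
    require(saleKey != 0 && key == saleKey, "Sale closed or bad key");
    // ... minting logic ...
}
```

Because the key only exists on-chain once the sale is already open, a bot cannot pre-build a valid transaction ahead of time; it has to wait and extract the key like everyone else.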

It is imperative to note that this method solely focuses on slowing down botters. This is an easy and cost-effective method if you have a hyped drop and you're worried about bots dominating the minting experience.

(Yes, the most competent developers will still know how to get this data a few seconds after you set it. Making it impossible forever is not our goal; solely to make it highly inconvenient and slow.)

For a mint that is more involved with on-chain data or an extended mint period where bots need to be prohibited entirely, one may want to take it a step further and/or use a completely different method of bot-minting prevention.

Unfortunately, we cannot go through every single type of bot-minting prevention mechanism there is, but to give you a few reference points to follow for more profound research:

  • Account signatures / Signed hashes.
  • Verify the transaction origin.
  • Mint limits (Easily gamed, not the best choice.)
  • Improved distribution mechanisms that aren't dependent on a first-to-arrive system.
  • Send trigger-controlling transactions through a Private Mempool.

So, to restate: there are a ton of non-loss-inducing ways to launch a highly hyped drop. Forcing losses onto the most loyal and least technologically adept is not best practice, and it can easily be avoided while saving the community more money than the lossy way ever did.

Building With Extendable Project Proxy Approvals In Mind

As we step into the dawn of on-chain gaming, it is going to be more critical than ever to make sure your collection is prepared for integrability.

What's this mean?

All developers should prepare the contract to be plug-and-play if there are plans to be a future blue chip in the gaming/more involved NFT ecosystem. This won't be a choice. Come back in a few months and thank me. Yet how many contracts do you see launching with code that supports the team's future dreams? Very few.

How do we know this is coming? We have finally seen the first multi-project-connecting launches.

The day is coming. Will your smart contract be ready to support the mechanisms needed to be part of the deeper cycle? One project has already gone fully integration-focused, connecting many of the most popular NFT projects into one pixelated Metaverse.

As this method of building and growing becomes more popular, few things contain as much inherent potential as ecosystem-integrating projects.

We've just seen one of the first projects come to life under this model, Zen Ape.

Zen Ape community members who hold Cyber Kongz $BANANA get exclusive access to features unlocked with an ERC20 token that comes from an entirely different collection.

That is pretty cool but can be highly gassy without the proper systems in place. This is where proxies come into play.

If a project launches without a proxy management system on the parent contract facilitating the interaction, every holder faces a required approval transaction, and those gas fees quickly rack up into an incredible extraction of funds solely due to poor development practices.

If you need a little more convincing of how important this will soon be, feel free to take a look at one of the few collections currently in the running for future blue-chip status, Cool Cats. Coming from the developer Tom, we can see our future is packed with the potential of integrability:

Following the path of cross-collection integration allows for a level of community interaction chased by many dreamers. By opening the gates of extendability, entire projects can be built around concepts and features.

Let's look at this another way.

In Wreck-It Ralph, the Metaverse is the Internet. In reality, that's not the situation.

As the digital ecosystem evolves, the idea of what a metaverse is continues to grow. Few have touched a digital metaverse with a headset because the world isn't ready. The world is ready for (game) ecosystems that work with one another instead of against each other. The ecosystems that bring many communities together will always be the ones that prosper.

With the basic desire understood, let's figure out how we accomplish this. It is straightforward, as the process is very similar to the OpenSea proxy approval covered earlier.

In the function we built for OpenSea approval earlier, we will now check another mapping of addresses for the Collection contracts.

This comes in handy when Series 2 of a Collection burns a token from Series 1. With the pre-approval set up through a proxy methodology, all holders can avoid the frustration of paying a fee to give the contract permission to interact with the token.

There is a fantastic GAN NFT collection that required burning a previous collection…

There was no approval code in the function and no proxy system, so the holders had to run the transaction before the drop even came. It's a less-than-ideal user flow and, more importantly, a waste of money.

Support for the collection was immediately stunted, and the same issue plagued the ability to add future collections. There is no escape from a poorly written foundational contract.

Imagine this one step bigger and then figure out how this is achieved. Imagine we want to build an RPG game, and we have a system of:

  • Tavern
  • Inventory
  • Forge

And let's say both the Tavern and Forge use tokens from the Inventory collection. If both contracts are to interact with those tokens, each needs approval from every holder. There would be no way around it unless the original contract was made to support that. We don't want our players to pay $35 before we even let them through the door. That isn't just a bad option. That can't be an option.

As a player, we assume the Tavern and Forge are connected to the underlying system (Inventory) and know what items they can accept and return. Thus, the communication pipeline between contracts should have been the starting point of any development, right? Why is that not how most NFT projects with intricate smart contracts work today? We are still very early.

Pulling this off is relatively simple. Let's go over it. To start, we will once again update our isApprovedForAll() with an additional check of whether or not the operator is one of the collection contracts.

We have the check implemented now, so let's go ahead and implement the tracking it relies on.

With a simple address-to-boolean mapping and a toggle function, we can instantly approve cross-collection interactions without any holder having to pay an approval fee!
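
Put together, a sketch of that pattern looks something like the following (names such as `projectProxy` and `flipProxyState` are illustrative; the registry check mirrors the OpenSea proxy approval from earlier):

```solidity
// Sketch: pre-approve sibling project contracts the same way the OpenSea
// proxy registry is pre-approved, so holders never pay for approve().
mapping(address => bool) public projectProxy;

// Owner-only toggle to add or remove an approved collection contract.
function flipProxyState(address proxyAddress) public onlyOwner {
    projectProxy[proxyAddress] = !projectProxy[proxyAddress];
}

function isApprovedForAll(address _owner, address operator) public view override returns (bool) {
    // The OpenSea proxy-registry check from earlier, plus our own proxy list.
    ProxyRegistry proxyRegistry = ProxyRegistry(proxyRegistryAddress);
    if (address(proxyRegistry.proxies(_owner)) == operator || projectProxy[operator])
        return true;
    return super.isApprovedForAll(_owner, operator);
}
```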

Just like that, we no longer have to worry about the process of having our holders go and approve a contract if they want to interact with a functionality. No one has to wait or pay for an additional transaction to process before the actual functionality can be used.

Implementing a system like this doesn't just save every buyer money but keeps an extreme amount of money in the ecosystem that can be spent within the market rather than losing that liquidity to miners as they gobble up gas fees.

Handling Loops That Constantly Grow

When first discussing the proposed optimization of rewriting ERC721Enumerable in How the standardized usage of Open Zeppelin smart contracts cannibalized NFTs with insane gas prices, it immediately became apparent that not many NFT developers have gone that deep into the core of the products that they are launching and selling.

This section will be a little more technical than the rest, so if you aren't interested in how to best manage on-chain lists that are constantly growing, you can move on to the next section.

By implementing the structure built in Part One of this journey, we removed redundant information by reconfiguring token tracking into a single, extensively long _owners array. The trade-off, as you can imagine, is loop work: we should always minimize the amount of huge-loop work we perform on-chain and think about how to mitigate it.

The primary cost of our optimization shows up in functions like walletOfOwner() and balanceOf(), where we need to loop through the entire collection.
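
For context, with only a single `_owners` array to read from, a sketch of those loops looks something like this:

```solidity
// Sketch: with ownership stored only in `_owners`, these reads become
// full-collection loops instead of O(1) mapping lookups.
function balanceOf(address owner) public view override returns (uint256) {
    require(owner != address(0), "Zero address query");
    uint256 count;
    for (uint256 i; i < _owners.length; ++i) {
        if (_owners[i] == owner) ++count;
    }
    return count;
}

function walletOfOwner(address owner) public view returns (uint256[] memory) {
    uint256[] memory tokens = new uint256[](balanceOf(owner));
    uint256 index;
    for (uint256 tokenId; tokenId < _owners.length; ++tokenId) {
        if (_owners[tokenId] == owner) tokens[index++] = tokenId;
    }
    return tokens;
}
```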

But just because things are different than what you are used to does not mean they aren't better. Let's think about this in a different way:

“Consider the application: who is asking for balanceOf()?

90% of the time it’s an external read-only; we don’t pay for this. If you need this utility in another contract, it’s usually to check ownership.”
- Squeebo

If we take a step back, we can see that this is called off-chain the majority of the time, and when we do need this information on-chain, there are more appropriate ways to handle this without introducing massive gas costs.

For example, let's say we have a mint function that requires holding a token from a previous collection. From the new contract, we need to check with the previous collection to see if the minter holds a specific token id.

To do this, we will first make sure the front-end is prepared to handle basic calculations and save everyone some gas and time with:
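
For instance, a hypothetical front-end helper can derive a wallet's token ids from the `_owners` data fetched with a free read-only call, so the chain never has to loop:

```javascript
// Sketch: `owners` is the _owners array fetched off-chain (index = tokenId).
// Computing this in the browser is free; looping on-chain is not.
function walletOfOwner(owners, wallet) {
  const target = wallet.toLowerCase();
  const ids = [];
  owners.forEach((owner, tokenId) => {
    if (owner.toLowerCase() === target) ids.push(tokenId);
  });
  return ids;
}
```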

Then in our contract, we will take that tokenId and verify it on-chain with:
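
A sketch of that check (interface and names assumed): the front-end supplies a tokenId, and a single O(1) ownerOf() call confirms ownership:

```solidity
// Sketch: verify ownership against the previous collection in O(1).
interface IPreviousCollection {
    function ownerOf(uint256 tokenId) external view returns (address);
}

IPreviousCollection public previousCollection;

function holderMint(uint256 previousTokenId) external payable {
    // The front-end found this tokenId; the chain only confirms it.
    require(previousCollection.ownerOf(previousTokenId) == msg.sender, "Not the holder");
    // ... supply and payment checks, then the actual mint ...
}
```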

As you can see, the changes required aren't massive. It's just a different way of thinking about access and ownership tracking. One final important thing to note is that the gas costs of looping through a collection of 10k tokens will be high without proper management.

Every step along the way, every developer should try their best to keep the collection scale in mind, or a situation could quickly appear out of thin air as the scale increases. Sometimes keeping the processing off-chain is impossible, and we have to tackle the problem whether we like it or not. Squeebo wrote an excellent article, The Dao of Solidity, that discusses a more evolved method of verifying ownership on-chain without being super gassy.

That's not it, though; let's think one step deeper. Is the use of balanceOf() something we even want?

For a lot of situations, not really. Using it is a very Web2 way of thinking about data utilization, and there are far more unique ways to handle things efficiently.

Launching With Cost-Effective Allowlisting By Using Merkle Trees

Okay, we are getting in the weeds now, but let's keep our head above the water and understand this on a conceptual level.

If you are a developer that wants to understand this on a deeper level, I will link a lot of resources that can sum this up better than I ever could.

Developer resources:

Okay, before jumping into Merkle Tree allowlists, let us first consider how a developer might implement an allowlist off-chain and why we can't do it that way in Solidity on the Ethereum network.

Before we can build the allowlist system that best suits your collection, we first need to figure out what it is we are storing on our allowlist. For example, let's look at two very different types of on-chain allowlisting.

  1. An allowlist that verifies an address is on a list.
  2. An allowlist that verifies how many times an address is on a list.

Imagine the day of allowlist minting as an event where celebrities show up. You roll out the red carpet and spare no expense to offer the best experience, so that they talk and spread the good name of you and all the hard work you put in!

Are you running the type of event where they get to bring in as many guests as they like? Perhaps you're a little more exclusive, and only they can come, and their guests had to buy tickets like everyone else. The individuals on an NFT project allowlist are the early-day VIPs, and taking the effort to deliver that kind of experience provides incredible successes time after time without fail.

For this example, let's look at offering an allowlist with a variable allowance, since that is one of the most commonly desired structures among the literal hundreds of developer-job requests I've gotten in the last few weeks. Charging clients for a desire this primal is not even worth it. So instead, we will further open-source the method and explain how an allowance-driven NFT Merkle Tree allowlist works without being super gassy.

Let's start with an example allowlist:
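
Something like the following (a hypothetical file; the addresses are placeholders):

```json
{
  "0x1111111111111111111111111111111111111111": 2,
  "0x2222222222222222222222222222222222222222": 2,
  "0x3333333333333333333333333333333333333333": 4
}
```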

On this allowlist, we have three addresses. Two with an allowance of two tokens and one with an allocation of four. With our allowlist made, we now need to implement a way to hash this information in preparation for our Merkle Tree.

So, let's build a quick Javascript function that reads our file and maps it with the hash of the allowlist information we supply:

Now that we have a way to hash our allowlist, we need to make sure our Merkle Tree pulls all of these hashes to get the root and deploy that with the contract.

With this in place, our tree has now been made with all of the allowlist information stored within it. Follow that up by getting the hex root (ex: 0x5f3bda1f522381c07506af1efaf1f6d2e6692e1e17032d80cbd3ac1cce16e70b), and we are officially ready to move that on to the contract where we can send in this same information along with a Merkle Proof to verify access at the time of minting.

We will write a test that mints a token to verify that everything functions correctly. To call the function, we will need to submit the variables required by the Merkle Tree along with the proof that the contract will check against.

We know our Merkle Tree is all set up with everything running in tip-top shape! Let's go ahead and make sure the code in our test actually calls the mint function, and finally, we will be ready to implement this on the contract.

The contract side of this is straightforward and does not require much alteration from many of the previously provided methods from the community.

As usual, we will start by building out the target functionality, and then we will come back and fill in the pieces. Immediately let's head to our whitelistMint() and make sure that we get everything set up like:
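
A sketch of that setup, using OpenZeppelin's MerkleProof library (the names and the allowance-tracking mapping are assumptions; they are one possible way to enforce the limit):

```solidity
import "@openzeppelin/contracts/utils/cryptography/MerkleProof.sol";

bytes32 public merkleRoot;
mapping(address => uint256) public whitelistMinted;

function whitelistMint(
    uint256 count,
    uint256 allowance,
    bytes32[] calldata proof
) external payable {
    // Pseudo-state: an unset root means the presale hasn't opened.
    require(merkleRoot != bytes32(0), "Presale not open");

    // Confirm (msg.sender, allowance) was a leaf of the deployed tree.
    bytes32 leaf = keccak256(abi.encodePacked(msg.sender, allowance));
    require(MerkleProof.verify(proof, merkleRoot, leaf), "Invalid proof");

    // The tree vouched for the allowance; now enforce it.
    require(whitelistMinted[msg.sender] + count <= allowance, "Exceeds allowance");
    whitelistMinted[msg.sender] += count;

    // ... payment and supply checks, then _safeMint(msg.sender, ...) ...
}
```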

When we mint, we will pass in a count of tokens being minted, the allowance of that address, and the proof needed to verify the Merkle Tree. When the Merkle Tree confirms the message sender was initially allowed for the allocation provided in the parameters, we continue by checking that the message sender will not exceed their available allowance.

It's that simple. If everything looks good, we mint with the trusted _safeMint() as we send on our allowlisted VIPs on their way with an incredibly low-gas experience and their new token(s).

The project creators don't have to worry about submitting a massive allowlist in the form of an array or mapping, and there are no high costs to absorb through the minting experience because everything is kept as simple as possible.

It is imperative to note that this allowlist function avoids extra sale-state variables; that kind of data would be redundant, as there are far more efficient ways to verify a state.

For example, for a Merkle Tree allowlist, the pseudo-state can be based on whether the Merkle Root has been set. If no Merkle Root has been provided, the transaction will fail; once the root is delivered, transactions will process (opening the sale).

Just like that, we have an allowlist minting function that costs essentially the same to configure as the public sale, with a minting experience almost identical in cost to the public mint. Massive success!

Building A Project With Each Optimized Piece

With all of that covered, we are finally ready for action. For the last few months, my team and I have worked on Nuclear Nerds, an NFT project focusing on a deeper narrative experience than anything that's been launched before in this industry.

They aren't just building an NFT. They are creating an experience that has been special to the entire team and me since day one.

Inspired by their drive to achieve new heights, we initially began this journey into optimizing contracts and building a technology foundation as strong as every other part of the project. After all, we are on the blockchain. A project with a shaky technology foundation will undoubtedly fall over in the first down period.

Through the last five months, we have gone through a harrowing experience of optimizing every piece of technology and building a system that can not only support but magnify the dreams of the Nuclear Nerds team.

With a long set of functionality to roll out after launch, the focus then shifted to building the absolute best smart contract that could exist. So, everything discussed in this article has been implemented with many extra gas-saving practices in place.

If you would like to see an in-production implementation of everything we talked about today, you can head over to Etherscan and check out the contract.

Combining these optimizations gives a project the foundation needed for a memorable run following launch. Without the proper things in place, it is easy to fade out of existence as fast as a project appears. With a solid foundation that opens gateways to previously blocked-off dreams, it isn't just the team that is unlocked; the community is as well.

Now that we've launched, let's dig into the details.

Immediately upon deploying, it was apparent our hard work would pay off. When we deployed the contract, we also immediately collected the reserves that were saved for the team. The results were shocking.

We minted 111 tokens for .11 ETH. That is ~$4 per token.

Then, we stepped into the 24 hour presale period, where people quickly realized we had put a significant amount of effort into saving an extreme amount of gas.

All the hard work came together to deliver an experience that many had never had before.

The compliments on our hard work continued to roll in, but the mint was moving slowly. Neither the team nor I was worried, though. We knew what was coming. We had a few tricks up our sleeves that immediately conveyed how much we care about what we are doing and about the community.

A few people who minted during the presale period went to list their Nerd on OpenSea and realized we had done them the favor of pre-approving their OpenSea proxy account, so they didn't have to pay the approval fee.

Just like that, the fire was lit.

The word started spreading. We were utilizing a different method to lower the costs of minting; we hadn't stopped there, and people could see that.

As things built up towards the public sale, the Discord general chat of Nuclear Nerds was ripping. A fully organic way of growth resulted in a community trying to bust down the doors of everything we had running. Excitement was in the air, the nerds were getting out of control, and our servers felt the strain.

The public sale opened, and the system ran perfectly.

Without a hiccup, Nuclear Nerds began minting at a meager cost, at an auspicious time of day when gas was only 50 Gwei. We had approximately 6,000 tokens to sell, and our primary goal was preventing a gas war and failed transactions.

Through the next 35 minutes, there was a constant flow of new members into the Nuclear Nerds community as they minted. It was a steady climb, but there was no warning for the spike that was soon to come.

Out of nowhere, things got hectic.

In the last minutes of the mint, the word spread far into niches Nuclear Nerds hadn't yet reached. Just like that, Nuclear Nerds had minted out, and the mint was closed.

  • No one bore massive losses from minting.
  • No one felt the pain of a gas war.
  • Nuclear Nerds did not even make it to the first page of gas consumers on Ethereum during the time of this final minting peak.
  • The most gas paid by a single person was .2 ETH, and they minted 60 Nuclear Nerds.

(If you would like to dig into the stats of Nuclear Nerds, you can do so here on our Dune Dashboard)

In total, Nuclear Nerds cost 30 ETH to mint the entire collection. As a reminder, The Littles burned 338 ETH in FAILED TRANSACTIONS alone; that does not even include the actual cost to mint. Do you see the difference?

With $1.8M+ in gas saved (already) for Nuclear Nerds just by using the methods in this article, it should be clear why any developer that doesn't employ these methods is neither as pure-intentioned nor as skilled as they claim.

That is an essential point you should consider before buying any NFT sold on the premise of the team delivering something unique in the future.

The best route to follow is not always the easiest route. So what do the founders of Nuclear Nerds think about the experience so far?

The Thoughts of the Nuclear Nerd Founders

Since launch day, it has been a flurry of emotions, things to do, and conversations to have, and every single one of them has been amazing. The journey to get to this day has been a long one, and it's only getting started, but the experiences we've already had with the team will stay with me for life.

So, when I went to ask them for their thoughts today, this is what they (dc & hotshave) responded with:

When we first talked to Chance & UTC, the thing we liked most was they wanted to make sure we understood they were there to take care of ALL our technology needs. It was this simple statement that sold us. Not because they were offering a one-stop-shop, but because they implicitly understood that to make a long term project like this hum, the creative technology had to mesh seamlessly with the creative idea. They knew a team of tightly connected folks focusing on the whole, rather than a disconnected team focusing on the parts, was the way to the promised land.

By focusing on the whole, everyone understands both the broader, long term vision for the project as well as how their unique skill sets can improve it. Then everyone works together to iterate and improve.

And this, for us, is where the magic happens. We’re two long time creative guys, having made ads, films, and shorts for all kinds of different clients and all kinds of different brands in all kinds of different media. But we’re not developers, so our creative ideas and thinking in this new space are limited by what we understand to be possible.

Luckily we brought on a dev team that was eager to share and collaborate. With such a mindset, it didn’t take long before ideas took flight, flow happened and creative technologists were riffing ideas with storytelling creators. This, for us, is how any creative project gets exponentially better, giving it the opportunity to reach its full potential.

But wait…there’s more! Because the dev team wasn’t just worrying about our experience, but also that of the project’s community. And that’s where the insane contract optimization comes in, saving all those who bought a Nerd north of $1M in gas fees on mint day.

That’s a recipe for a great creative relationship, starting on the right foot from Day 1 and being prepared for the long term. And for that, we can’t thank Chance & UTC enough.

The first time reading their response, it hit me slightly differently from the normal appreciation. The Nuclear Nerds team is exceptional. Each step of the way, there has been an inherent understanding between every team member that we are always chasing the highest quality possible and doing things in extraordinary ways.

The message from the team perfectly summed up why working with people who share the same passion level is so special.

Some people search their whole lives to find one or two people they work with to that level of success. Rarely is this the case, and the Nuclear Nerds have assembled an entire team of individuals like this. I could not be more excited to explore the Wasteland with all you Nerds.

Moving Forward

The future is upon us, and the only way to improve is by putting the minds in this space together to build the absolute best ecosystem possible. When one of us wins, we all win. That's the beauty of this space, isn't it?

It's a space where merit and determination are rewarded to massive scales, and I do not just mean financially—the opportunities, the people, the experiences; all priceless. Now, as we step into the story of Nuclear Nerds, the surprises have only just begun. We're now ready to expand the Nuclear Nerd universe and bring each member inside through high-quality and memorable experiences with a solid foundation.

Putting in the work and creating a gas-efficient smart contract that supports your dreams can be what determines whether you make or break it.