Core Features

Truly Distributed Storage

Nearly every NFT project ignores the problem of storing the underlying NFT asset. Users often believe that NFTs are minted and exist in a decentralized manner on the blockchain. In reality, only the token itself is non-fungible; the token does not embody the rare asset it refers to. Most platforms simply outsource storage of the actual rare asset to a third party. In this case, the owner really owns only a non-fungible token that authenticates a file stored on a distributed protocol like IPFS, or even simply a URL pointing to where the file is stored, say on Google or AWS.

Storing an NFT’s metadata and assets on a centralized server or relying on IPFS for the maintenance of files makes any creator or owner highly vulnerable to the loss of assets if the centralized entity were to shut down or if the protocol links were to go dead.

Pastel is the first NFT platform to have its own completely integrated, decentralized storage layer based on advanced technology such as RaptorQ fountain codes. Pastel ensures that the digital asset itself is uploaded, verified, and registered on the Pastel blockchain — rather than just the token with which it is minted. Through a series of smart tickets living on the Pastel ledger, artists can store their masterpieces in a distributed fashion across a variety of Supernodes as opposed to just ensuring the token is non-fungible. Our goal is to ensure that even in 100 years, the world does not lose access to a single one of the NFTs entrusted to the Pastel network.
RaptorQ for Redundant Chunk Generation

This sophisticated storage layer, leveraging the RaptorQ fountain code algorithm, begins by breaking each asset up into a series of redundant chunks. Each chunk contains random fragments of the combined file and is distributed redundantly across participating Supernodes running on the network.
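RaptorQ itself is specified in RFC 6330 and is considerably more involved, but the core fountain-code idea - emitting redundant chunks, each built from a seeded random subset of source blocks - can be sketched in a few lines. Everything below (block size, seeds, function names) is illustrative, not Pastel's actual implementation:

```python
import hashlib
import random

BLOCK_SIZE = 4  # bytes per source block (toy value; real symbols are larger)

def split_blocks(data, size=BLOCK_SIZE):
    """Pad the file to a block boundary and split it into source blocks."""
    data += b"\x00" * ((-len(data)) % size)
    return [data[i:i + size] for i in range(0, len(data), size)]

def make_chunk(blocks, seed):
    """One fountain-code chunk: the XOR of a seeded random subset of blocks."""
    rng = random.Random(seed)
    degree = rng.randint(1, len(blocks))             # how many blocks to mix
    chunk = bytearray(BLOCK_SIZE)
    for idx in rng.sample(range(len(blocks)), degree):
        for i, b in enumerate(blocks[idx]):
            chunk[i] ^= b
    return bytes(chunk)

data = b"example NFT asset bytes"
blocks = split_blocks(data)
seeds = [101, 202, 303, 404, 505]                    # one seed per chunk
chunks = [make_chunk(blocks, s) for s in seeds]
# Each chunk is identified by its SHA3-256 hash, as in the registration ticket.
chunk_hashes = [hashlib.sha3_256(c).hexdigest() for c in chunks]
```

With enough distinct chunks, a decoder can solve for the original blocks; the redundancy means any sufficiently large subset of chunks suffices, so the loss of individual chunks is tolerable.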

Kademlia for Random Chunk Distribution

The sets of chunks are then auto-distributed across the network to randomly selected Supernodes using the Kademlia DHT algorithm. Kademlia provides a useful “distance metric” that can be computed for any binary string of data, eliminating a great deal of otherwise necessary architecture: no complex or centralized system for deciding which node is responsible for which chunk, no iterating through Supernodes to find one holding the relevant chunk, and no complicated logic for re-allocating chunks as Supernodes enter and leave the network.

Each chunk is uniquely identified by a SHA3-256 hash, just as each Supernode is identified by the SHA3-256 hash of its public key. We take the binary representation of each hexadecimal string (both the chunk identifier and the Supernode identifier) and compute the XOR distance between the two strings. The smaller this distance, the “closer” the chunk is to the Supernode in the network. Supernodes are responsible for storing the chunks that are “closest” to them. As new chunks are created and as Supernodes enter and leave the network, the set of “closest” chunks for a given Supernode changes. The result is a completely distributed, deterministic way for the network to self-organize into a particular topology using random outputs.
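The distance computation described above can be sketched directly; the node names and chunk bytes here are made up for illustration:

```python
import hashlib

def node_id(public_key):
    """SHA3-256 of a Supernode's public key, as an integer for XOR math."""
    return int.from_bytes(hashlib.sha3_256(public_key).digest(), "big")

def xor_distance(a, b):
    """Kademlia's distance metric: the bitwise XOR of two 256-bit identifiers."""
    return a ^ b

def closest_supernodes(chunk_id, supernodes, k=3):
    """Rank Supernodes by XOR distance to the chunk; the closest k store it."""
    return sorted(supernodes, key=lambda n: xor_distance(chunk_id, supernodes[n]))[:k]

# Toy network: hypothetical Supernode names mapped to their identifiers.
supernodes = {n: node_id(n.encode()) for n in ("sn-a", "sn-b", "sn-c", "sn-d", "sn-e")}
chunk_id = int.from_bytes(hashlib.sha3_256(b"some-chunk-bytes").digest(), "big")
owners = closest_supernodes(chunk_id, supernodes)
```

If one of the owners leaves the network, re-running the same ranking over the remaining nodes deterministically yields its replacement - the next-closest Supernode - with no coordination required.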

Despite the redundancy and self-balancing described above, whereby a replacement Supernode is automatically found when an old Supernode leaves the network, it is still conceivable that a particular chunk could be lost forever. A very large and sudden drop in the number of available Supernodes - whether from market forces or an attack on the network - could wipe out every Supernode hosting that chunk before new Supernodes could take it over.

However, if this event were to occur, there is a remedy. Each chunk is uniquely determined by two items: the original data and a random seed for that chunk. The random seed is generated when the chunks are first created, and the set of these seeds, together with the file hash of each chunk, is also recorded in the artwork registration ticket. If Supernodes on the network determine that a given chunk is no longer available, the highest-ranked Supernode can retrieve enough of the remaining chunks to reconstruct the original file and then use the random seed corresponding to the missing chunk to regenerate an identical chunk from scratch. The result is easily verified by computing the file hash of the “new” chunk and checking that it matches the hash listed for that chunk in the original artwork registration ticket on the blockchain.
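A minimal sketch of this regenerate-and-verify flow, using a toy deterministic chunk generator in place of the real fountain code (all names and the ticket layout are illustrative, not Pastel's actual ticket schema):

```python
import hashlib
import random

def gen_chunk(data, seed, size=16):
    """Toy stand-in for fountain-code symbol generation: a seeded,
    deterministic sample of the file's bytes. The key property is that
    (data, seed) fully determines the chunk."""
    rng = random.Random(seed)
    padded = data + b"\x00" * max(0, size - len(data))
    return bytes(rng.choice(padded) for _ in range(size))

# At registration time the ticket records each chunk's seed and file hash.
original = b"the artwork file bytes"
ticket = [{"seed": s, "hash": hashlib.sha3_256(gen_chunk(original, s)).hexdigest()}
          for s in (7, 11, 13)]

def regenerate_and_verify(data, entry):
    """Rebuild a lost chunk from the reconstructed file and its stored seed,
    then check it against the hash recorded in the on-chain ticket."""
    fresh = gen_chunk(data, entry["seed"])
    return hashlib.sha3_256(fresh).hexdigest() == entry["hash"]
```

Because the generator is deterministic, any node holding the reconstructed file and the ticket can recreate a lost chunk and prove its correctness to the rest of the network.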

Near-Duplicate Detection

The standard way many crypto projects characterize NFTs is by taking the hash of the file as the ID or fingerprint of the underlying image. The problem with this method is that someone can copy the NFT, say a piece of digital art, make a trivial modification, and re-upload it to the system. The hash changes completely, even though the image is essentially identical.
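This fragility is easy to demonstrate: changing a single byte of a file yields a completely unrelated hash, so near-duplicates slip past hash-based fingerprinting entirely:

```python
import hashlib

original = bytes([10, 20, 30, 40]) * 1000   # stand-in for an image's bytes
tweaked = bytes([11]) + original[1:]        # "edit" a single byte

h1 = hashlib.sha3_256(original).hexdigest()
h2 = hashlib.sha3_256(tweaked).hexdigest()

# The one-byte edit scrambles the digest (the avalanche effect), so the
# two fingerprints share nothing recognizable.
differing = sum(a != b for a, b in zip(h1, h2))
```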

What is needed is a way of robustly characterizing the intrinsic content of an NFT as a numerical fingerprint - one that doesn’t change materially when the same file is uploaded again, or even when an image is modified and re-uploaded in superficial ways such as cropping, altering colors or brightness, or applying various Photoshop-style filters like edge detection or warping.

Essentially, we take the output of the second-to-last layer of each of the 5 neural net models (each of which is based on a different approach and architecture) - a vector of numbers that summarizes the internal state of the model when it is stimulated by the given image. We then combine these 5 differentiated output vectors into a single string which serves as the final fingerprint. The neural networks are highly sensitive to the images they are shown and have “learned” their semantic content, and thus they are able to characterize the essence of an image numerically - even if various changes are made to the original. Even if every single pixel in the original image differs from the duplicate image, the system will detect a level of similarity that would be unlikely to occur by chance.
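A rough sketch of this combination step, with seeded random numbers standing in for real neural-net activations (the dimensions, model count handling, and function names are illustrative only):

```python
import math
import random

VECTOR_DIM = 8   # toy size; real penultimate-layer activations are far larger
N_MODELS = 5

def model_activations(image, model_seed):
    """Stand-in for one model's second-to-last-layer output. A real system
    runs the image through a trained network; here a seeded RNG fakes
    activations that depend only on the image content and the model."""
    rng = random.Random(f"model-{model_seed}:{image.hex()}")
    return [rng.uniform(-1.0, 1.0) for _ in range(VECTOR_DIM)]

def fingerprint(image):
    """Concatenate the per-model vectors into one combined fingerprint."""
    combined = []
    for model_seed in range(N_MODELS):
        combined.extend(model_activations(image, model_seed))
    return combined

def cosine_similarity(a, b):
    """Compare two fingerprints; near-duplicates score close to 1.0."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm
```

Unlike a file hash, fingerprints produced this way can be compared on a continuous scale, so a registration candidate can be flagged when its similarity to any existing fingerprint exceeds a threshold.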

Rareness levels:

- Unknown Rarity: NFT is signed by the artist’s PastelID private key.
- Rare to Pastel: NFT is signed and not similar to any images already registered on Pastel.
- Highly Rare: NFT is signed, new to Pastel, and not similar to images on Google Images and other internet sources.

Unknown Rarity: These are certified Signed by the Artist NFTs. This lowest rareness level in Pastel corresponds to the typical Ethereum-based NFT use case. Anyone can check the artist’s PastelID public key on the artist’s social media sites or personal website and use that public key to verify that the NFT was in fact signed by the corresponding private key, which only the artist would know. This establishes the provenance of the image, and the particular block it is included in on the blockchain establishes the order and timing of the artwork registration. The underlying image file itself is also stored directly in the Pastel Network’s native storage layer, a big difference from Ethereum-based projects. But the basic idea of how the NFT is registered and validated by nodes into a block is the same.

The next two levels of rareness are at the level of the image data itself - not just the metadata of the NFT or whether that metadata is signed by the artist’s PastelID private key.

Rare to Pastel: This designation includes NFTs that are new to the Pastel Network. If an image is completely new to the system, it earns this designation, meaning that the NFT is not excessively similar to any NFT already registered on the network by any user. This information is included in the original art registration ticket and is indicated in various ways in the wallet interface, so that users can easily understand at a glance which images are certified new to Pastel.

Collectors may prefer to spend more on an NFT that is rare at the level of the pixel patterns. Other users who are insensitive to the rarity of the NFT and view it as more of a digital collectible, or as a way to support their favorite creators, can buy NFTs that are signed by the artist even if they aren’t “rare” in this stricter sense.

NSFW Detection

Supernode Operators can decide in their settings which images they are willing to register on the network, and which images they are willing to store file chunks for in the Pastel storage layer. For example, a very careful, cautious Supernode operator living in the United States might decide to never register or store file chunks for any image with an NSFW score of over 0.95. A Supernode operator less sensitive to local regulations may be willing to register or store chunks for all images, even if they have a very high NSFW score.
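A minimal sketch of such an operator policy (the threshold and field names are illustrative, not Pastel's actual Supernode configuration format):

```python
# Hypothetical operator setting; a cautious operator might use 0.95,
# a permissive one might effectively disable the filter with 1.0.
NSFW_THRESHOLD = 0.95

def willing_to_handle(nsfw_score, threshold=NSFW_THRESHOLD):
    """True if this Supernode will register / store chunks for the image."""
    return nsfw_score <= threshold

incoming = [{"id": "img-1", "nsfw": 0.10},
            {"id": "img-2", "nsfw": 0.97}]
accepted = [img["id"] for img in incoming if willing_to_handle(img["nsfw"])]
```

Because each operator applies its own threshold locally, images with high scores are simply handled by fewer, more permissive Supernodes rather than being blocked network-wide.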

Since NSFW scores are built into the system, from a user interface standpoint, default wallet settings allow for selecting an appropriate level to ensure that the platform accounts for local regulations and is “child friendly,” while also preserving the broader permissionless aspect of the system.

Negligible Fees

A system in which a single artwork NFT costs $500 worth of ETH to mint would end up excluding most digital artists in the world. We keep the cost to register an artwork below $0.05 - close to the level that offsets the real-world costs of providing the service (e.g., the cost of file storage in perpetuity) - ensuring that transacting is affordable irrespective of network congestion or the underlying price of PSL.

