The smart grid leverages information technology to augment the conventional power grid. The collection of energy-related data is a crucial aspect of the smart grid. The smart metering infrastructure, deployed throughout the smart grid, can record household energy consumption with a high degree of precision and transmit the consumption data to energy providers in real time. These data can provide insights that support the optimization of energy generation, distribution, and consumption, and assist in identifying usage patterns. Apart from the benefits offered to energy stakeholders, the data can also be shared publicly, in line with open science and open data principles.

Open science and open data refer to the practices of making scientific research and data publicly available. Open science promotes collaboration, sharing, and openness in scientific research, which includes making scientific literature, data, and methods freely accessible. Adopting an open-access policy for papers, preprints, and data enables unrestricted access to scientific output. Open data, on the other hand, entails making research data accessible for use and reuse by others. Open data aims to enhance transparency and reproducibility in research, as well as to facilitate data-driven scientific discoveries. The use of open data enables researchers to build on the work of others and to test new theories.

Nevertheless, the availability of open data raises concerns about data protection and privacy, particularly when personal information is included. Consequently, the public sharing of data can have a negative effect on consumers’ privacy and security. For example, consumption data can be exploited to infer personal details, such as people’s daily routines or the presence of particular appliances in a household. Consumer identification from consumption data is therefore a considerable privacy threat. To mitigate potential identification threats, the data should be properly anonymized prior to publication.

To this end, anonymization is an approach for protecting privacy by deleting or hiding personally identifiable information in data. Its purpose is to guarantee that individuals cannot be identified from the data, either on its own or in combination with additional data. Anonymization is therefore a crucial means of mitigating the risks associated with sharing data that contain private or sensitive information. Several anonymization methods can be applied prior to the publication of the data, including suppression, bucketization, permutation, perturbation, k-anonymity, L-diversity, t-closeness, and differential privacy. These methods can effectively protect the privacy of individuals while still enabling analysts and researchers to derive insights. For illustration, minimal Python sketches of these methods are provided after the list below.

  • Suppression can be classified into three main categories: record suppression, value suppression, and cell suppression. Record suppression removes entire records that contain sensitive information which could potentially identify an individual; while effective at safeguarding privacy, it carries the risk of losing critical data. Value suppression replaces particular sensitive values with more general or less sensitive ones, concealing the values that could reveal private information while preserving the rest of the record. Cell suppression masks or removes individual data values within a record; it is particularly useful when only specific values in a record are sensitive and must be suppressed while the others are retained.
  • Bucketization is a data anonymization method that groups individual values into ranges, or buckets. It is commonly used in data science and statistics to safeguard the privacy of individuals in sensitive datasets, particularly for concealing attributes such as age or income. By grouping such values into ranges that describe a broader population, individual privacy is better protected. A benefit of bucketization is that it helps preserve the usefulness of the dataset while protecting individual privacy; unlike perturbation-based techniques, it does not add noise to the data. Bucketization can also be combined with other anonymization methods, such as suppression and generalization, to create a more comprehensive privacy-protection strategy.
  • Permutation is a common anonymization method that shuffles or rearranges identifying information in a dataset to protect the privacy of individuals. Original values are replaced with values randomly drawn from the same set but in a different order. Permutation is effective because it breaks the link between individuals and their sensitive values, so the original associations cannot be reconstructed from the anonymized data. However, it is critical that the permutation scheme is well designed and that the anonymized dataset remains accurate and consistent with the original. It is also important to evaluate the trade-off between privacy and utility, as the anonymization process may result in the loss of useful information.
  • The perturbation method is a privacy-preserving technique that adds random noise or other modifications to the original dataset while preserving its usefulness for analysis or research, thereby minimizing the disclosure of sensitive information. Perturbation comes in several variants, including noise addition to numeric records, data swapping for numeric and categorical values, synthetic data generation based on mathematical models, and micro-aggregation, which replaces groups of records with low-risk aggregated attribute values.
  • k-anonymity is a popular anonymization method that groups records with similar quasi-identifier attributes. It requires that each record belong to an equivalence class containing at least k-1 other records, which ensures that the probability of identifying an individual from any record in the k-anonymized dataset is at most 1/k. However, k-anonymity remains vulnerable to linkage and attribute-disclosure attacks, as well as to homogeneity and background-knowledge attacks.
  • L-diversity is an anonymization method that aims to protect privacy by reducing the granularity of the data representation. An equivalence class satisfies L-diversity if the sensitive attribute of the records in that class takes at least L well-represented values; if all equivalence classes meet this requirement, the dataset satisfies L-diversity. Compared to a merely k-anonymous dataset, an L-diverse dataset carries a significantly lower risk of data leakage. Nevertheless, although L-diversity accounts for the diversity of sensitive values within each equivalence class, it remains vulnerable to similarity and skewness attacks, meaning that an adversary can still infer the value range or the sensitivity of an individual’s information.
  • The t-closeness method was developed to strengthen privacy protection beyond L-diversity by preserving the distribution of sensitive attributes. It limits the amount of individual-specific information an observer can obtain by requiring that the distribution of a sensitive attribute in any equivalence class be close to the attribute’s distribution in the entire table. Unlike k-anonymity, t-closeness directly limits attribute disclosure. An equivalence class achieves t-closeness if the distance between the distribution of a sensitive attribute in that class and the distribution of the same attribute across the entire table does not exceed a threshold t, and a dataset is t-close if all of its equivalence classes satisfy this requirement.
  • Differential privacy is a technique designed to safeguard sensitive data while still allowing useful information to be extracted from it. The objective is to ensure that the inclusion or exclusion of any single individual’s information has minimal impact on the released output. This is achieved by adding a calibrated amount of noise to the data or query results before release, which makes it difficult to determine whether a particular individual’s data is present in the dataset. Determining the appropriate level of noise is essential in order to balance privacy preservation against the usefulness of the released data.
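
The following Python sketch illustrates the three suppression variants on a toy table; the field names and values are hypothetical.

```python
# Toy suppression sketch; records, field names, and values are illustrative.
records = [
    {"name": "Alice", "zip": "10115", "kwh": 312},
    {"name": "Bob",   "zip": "10115", "kwh": 498},
    {"name": "Carol", "zip": "10117", "kwh": 275},
]

# Record suppression: drop every record belonging to a sensitive group.
record_suppressed = [r for r in records if r["zip"] != "10117"]

# Value suppression: mask a sensitive value wherever it occurs.
value_suppressed = [
    {**r, "zip": "*" if r["zip"] == "10115" else r["zip"]} for r in records
]

# Cell suppression: mask a single identifying cell in each record.
cell_suppressed = [{**r, "name": "*"} for r in records]

print(record_suppressed, value_suppressed, cell_suppressed, sep="\n")
```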
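
A minimal bucketization sketch follows; the bucket boundaries are assumed for illustration only.

```python
# Bucketization sketch: map exact ages to coarse ranges.
def bucketize(value, edges):
    """Return a label such as '30-39' for the bucket containing value."""
    for low, high in zip(edges, edges[1:]):
        if low <= value < high:
            return f"{low}-{high - 1}"
    return f">={edges[-1]}"

ages = [23, 37, 41, 58, 64]
print([bucketize(a, [0, 30, 40, 50, 60, 70]) for a in ages])
# ['0-29', '30-39', '40-49', '50-59', '60-69']
```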
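
A minimal permutation sketch, with hypothetical household identifiers, shuffles the sensitive column so that values can no longer be linked to the records that produced them.

```python
import random

# Permutation sketch: consumption values are preserved but randomly
# reassigned across households, breaking the original associations.
households = ["H1", "H2", "H3", "H4"]
kwh = [312, 498, 275, 530]

permuted_kwh = kwh[:]
random.shuffle(permuted_kwh)  # same values, different order

print(list(zip(households, permuted_kwh)))
```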
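
A minimal noise-addition perturbation sketch; the noise scale is an assumed tuning parameter.

```python
import random

# Perturbation sketch: add zero-mean Gaussian noise to numeric readings.
readings = [312.0, 498.0, 275.0, 530.0]
sigma = 15.0  # assumed noise scale, chosen to balance privacy and utility

perturbed = [x + random.gauss(0.0, sigma) for x in readings]
print([round(x, 1) for x in perturbed])
```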
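
A minimal k-anonymity check over assumed quasi-identifier attributes:

```python
from collections import Counter

# k-anonymity check: every combination of quasi-identifier values must
# be shared by at least k records. Attribute names are illustrative.
def is_k_anonymous(records, quasi_identifiers, k):
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

records = [
    {"age_range": "30-39", "zip": "101**", "kwh": 312},
    {"age_range": "30-39", "zip": "101**", "kwh": 498},
    {"age_range": "40-49", "zip": "102**", "kwh": 275},
    {"age_range": "40-49", "zip": "102**", "kwh": 530},
]
print(is_k_anonymous(records, ["age_range", "zip"], k=2))  # True
```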
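
A minimal distinct-L-diversity check, with illustrative attribute names; stricter entropy- or recursive-L-diversity variants exist as well.

```python
from collections import defaultdict

# Distinct L-diversity check: each equivalence class, formed over the
# quasi-identifiers, must contain at least L distinct sensitive values.
def is_l_diverse(records, quasi_identifiers, sensitive, l):
    classes = defaultdict(set)
    for r in records:
        classes[tuple(r[q] for q in quasi_identifiers)].add(r[sensitive])
    return all(len(values) >= l for values in classes.values())

records = [
    {"age_range": "30-39", "zip": "101**", "tariff": "night"},
    {"age_range": "30-39", "zip": "101**", "tariff": "standard"},
    {"age_range": "40-49", "zip": "102**", "tariff": "standard"},
    {"age_range": "40-49", "zip": "102**", "tariff": "solar"},
]
print(is_l_diverse(records, ["age_range", "zip"], "tariff", l=2))  # True
```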
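
A minimal t-closeness check for a categorical sensitive attribute, using total variation distance as a simple stand-in for the Earth Mover's Distance of the original formulation; attribute names are illustrative.

```python
from collections import Counter, defaultdict

# t-closeness check: the distance between each class distribution and the
# overall distribution of the sensitive attribute must not exceed t.
def is_t_close(records, quasi_identifiers, sensitive, t):
    overall = Counter(r[sensitive] for r in records)
    total = len(records)
    classes = defaultdict(list)
    for r in records:
        classes[tuple(r[q] for q in quasi_identifiers)].append(r[sensitive])
    for values in classes.values():
        local = Counter(values)
        distance = 0.5 * sum(
            abs(local[v] / len(values) - overall[v] / total) for v in overall
        )
        if distance > t:
            return False
    return True
```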
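
A minimal Laplace-mechanism sketch for a counting query, whose sensitivity is 1; epsilon is an assumed privacy budget.

```python
import random

# Laplace mechanism: a counting query has sensitivity 1, so Laplace noise
# with scale 1/epsilon yields an epsilon-differentially-private count.
def laplace_sample(scale):
    # The difference of two exponential samples is Laplace-distributed.
    rate = 1.0 / scale
    return random.expovariate(rate) - random.expovariate(rate)

def private_count(values, predicate, epsilon):
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_sample(1.0 / epsilon)

readings = [312, 498, 275, 530, 410]
print(private_count(readings, lambda x: x > 400, epsilon=0.5))
```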