Problem Statement

Given an array of elements, write a program or function that removes the duplicate elements so that each element appears only once in the result, with the order of first occurrences preserved. Ideally, the array is modified in place so that no additional data structures are needed, though we will also look at solutions that trade extra space for simplicity.

Method – 1 (Using Sorting)

One way to remove duplicates from an array is to sort it first and then iterate through it, skipping duplicate elements as you encounter them. Sorting guarantees that duplicate elements end up adjacent to each other, making them easy to identify in a single pass. The trade-off is that sorting costs O(n log n) and changes the relative order of the elements, so this variant does not preserve the original order.
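The sorting-based idea can be sketched in Python as follows (a minimal illustration; the helper name is ours, and note the result comes back sorted rather than in the original order):

```python
def remove_duplicates_sorted(arr):
    """Sort, then keep one copy of each run of equal, adjacent elements.

    Runs in O(n log n) time with no extra data structures beyond the sort
    itself, but the original element order is lost.
    """
    if not arr:
        return arr
    arr.sort()
    write = 0  # index of the last unique element kept so far
    for read in range(1, len(arr)):
        # After sorting, duplicates are adjacent, so comparing with the
        # previous kept element is enough to detect them.
        if arr[read] != arr[write]:
            write += 1
            arr[write] = arr[read]
    del arr[write + 1:]  # trim the leftover tail in place
    return arr

print(remove_duplicates_sorted([3, 1, 2, 2, 1, 3, 5]))  # Output: [1, 2, 3, 5]
```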

Solution 1 (Using extra space)

In this approach, we use a hash set or dictionary (depending on the programming language) to keep track of elements that we have already seen in the input array. We iterate through the input array, and for each element, we check whether it has been seen before. If it hasn’t been seen, we add it to the result array and mark it as seen in the hash set. If it has been seen, we skip it as it’s a duplicate.

C++ Code

#include <iostream>
#include <unordered_set>
#include <vector>

std::vector<int> removeDuplicates(std::vector<int>& arr) {
    std::unordered_set<int> seen;
    std::vector<int> result;
    for (int num : arr) {
        if (seen.find(num) == seen.end()) {
            seen.insert(num);
            result.push_back(num);
        }
    }
    return result;
}

int main() {
    std::vector<int> inputArr = {3, 1, 2, 2, 1, 3, 5};
    std::vector<int> result = removeDuplicates(inputArr);
    for (int num : result) {
        std::cout << num << " ";
    }
    return 0;
}

Java Code

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RemoveDuplicatesFromArray {

    public static List<Integer> removeDuplicates(List<Integer> arr) {
        Set<Integer> seen = new HashSet<>();
        List<Integer> result = new ArrayList<>();
        for (int num : arr) {
            if (!seen.contains(num)) {
                seen.add(num);
                result.add(num);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> inputArr = List.of(3, 1, 2, 2, 1, 3, 5);
        List<Integer> result = removeDuplicates(inputArr);
        for (int num : result) {
            System.out.print(num + " ");
        }
    }
}

Python Code

def remove_duplicates(arr):
    seen = set()
    result = []
    for num in arr:
        if num not in seen:
            seen.add(num)
            result.append(num)
    return result

# Example usage
input_arr = [3, 1, 2, 2, 1, 3, 5]
result = remove_duplicates(input_arr)
print(result)  # Output: [3, 1, 2, 5]

Solution 2 (Without using extra space)

In this approach, we aim to remove duplicates from the array without using additional data structures like sets or dictionaries. Instead, we modify the original array in place. The key idea is to maintain two pointers, one for iterating through the array and another for placing unique elements in their correct positions in the array.

C++ Code

#include <iostream>
#include <vector>

std::vector<int> removeDuplicates(std::vector<int>& arr) {
    if (arr.empty()) {
        return {};
    }
    int n = arr.size();
    int uniqueIndex = 0; // Index to track unique elements
    for (int i = 1; i < n; ++i) {
        bool isDuplicate = false;
        // Check if the current element is a duplicate
        for (int j = 0; j <= uniqueIndex; ++j) {
            if (arr[i] == arr[j]) {
                isDuplicate = true;
                break;
            }
        }
        // If it's not a duplicate, move it to the next unique position
        if (!isDuplicate) {
            ++uniqueIndex;
            arr[uniqueIndex] = arr[i];
        }
    }
    // Resize the array to the number of unique elements
    arr.resize(uniqueIndex + 1);
    return arr;
}

int main() {
    std::vector<int> inputArr = {3, 1, 2, 2, 1, 3, 5};
    std::vector<int> result = removeDuplicates(inputArr);
    for (int num : result) {
        std::cout << num << " ";
    }
    return 0;
}

Java Code

import java.util.ArrayList;
import java.util.List;

public class RemoveDuplicatesFromArray {

    public static List<Integer> removeDuplicates(List<Integer> arr) {
        if (arr.isEmpty()) {
            return new ArrayList<>();
        }
        int n = arr.size();
        int uniqueIndex = 0; // Index to track unique elements
        for (int i = 1; i < n; ++i) {
            boolean isDuplicate = false;
            // Check if the current element is a duplicate
            for (int j = 0; j <= uniqueIndex; ++j) {
                if (arr.get(i).equals(arr.get(j))) {
                    isDuplicate = true;
                    break;
                }
            }
            // If it's not a duplicate, move it to the next unique position
            if (!isDuplicate) {
                ++uniqueIndex;
                arr.set(uniqueIndex, arr.get(i));
            }
        }
        // Remove excess elements to keep only unique elements
        arr.subList(uniqueIndex + 1, n).clear();
        return arr;
    }

    public static void main(String[] args) {
        // Wrap in an ArrayList: List.of(...) is immutable, and this
        // method modifies the list in place.
        List<Integer> inputArr = new ArrayList<>(List.of(3, 1, 2, 2, 1, 3, 5));
        List<Integer> result = removeDuplicates(inputArr);
        for (int num : result) {
            System.out.print(num + " ");
        }
    }
}

Python Code

def remove_duplicates(arr):
    if not arr:
        return []
    n = len(arr)
    unique_index = 0  # Index to track unique elements
    for i in range(1, n):
        is_duplicate = False
        # Check if the current element is a duplicate
        for j in range(unique_index + 1):
            if arr[i] == arr[j]:
                is_duplicate = True
                break
        # If it's not a duplicate, move it to the next unique position
        if not is_duplicate:
            unique_index += 1
            arr[unique_index] = arr[i]
    # Slice the array to keep only unique elements
    return arr[:unique_index + 1]

# Example usage:
input_arr = [3, 1, 2, 2, 1, 3, 5]
result = remove_duplicates(input_arr)
print(result)  # Output: [3, 1, 2, 5]

Method – 2 (Use of HashMaps)

In this approach, we use a hash map (unordered_map in C++, HashMap in Java, a dictionary in Python) to record which elements have already appeared in the array. We iterate through the array, and for each element we check whether it has been encountered before. If it hasn’t, we add it to the hash map with a count of 1 and append it to the result. If it has, we skip it as a duplicate. (Because we only need to know whether an element was seen, the count never grows past 1 here; the map is functionally equivalent to the set used earlier, but a true frequency map would increment the count on each repeat.)

C++ Code (Using unordered_map)

#include <iostream>
#include <vector>
#include <unordered_map>

std::vector<int> removeDuplicates(std::vector<int>& arr) {
    std::unordered_map<int, int> freqMap;
    std::vector<int> result;
    for (int num : arr) {
        if (freqMap.find(num) == freqMap.end()) {
            freqMap[num] = 1; // Add to map if not seen before
            result.push_back(num);
        }
    }
    return result;
}

int main() {
    std::vector<int> inputArr = {3, 1, 2, 2, 1, 3, 5};
    std::vector<int> result = removeDuplicates(inputArr);
    for (int num : result) {
        std::cout << num << " ";
    }
    return 0;
}

Java Code (Using HashMap)

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RemoveDuplicatesFromArray {

    public static List<Integer> removeDuplicates(List<Integer> arr) {
        Map<Integer, Integer> freqMap = new HashMap<>();
        List<Integer> result = new ArrayList<>();
        for (int num : arr) {
            if (!freqMap.containsKey(num)) {
                freqMap.put(num, 1); // Add to map if not seen before
                result.add(num);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> inputArr = List.of(3, 1, 2, 2, 1, 3, 5);
        List<Integer> result = removeDuplicates(inputArr);
        for (int num : result) {
            System.out.print(num + " ");
        }
    }
}

Python Code (Using dictionaries)

def remove_duplicates(arr):
    freq_map = {}
    result = []
    for num in arr:
        if num not in freq_map:
            freq_map[num] = 1  # Add to dictionary if not seen before
            result.append(num)
    return result

# Example usage:
input_arr = [3, 1, 2, 2, 1, 3, 5]
result = remove_duplicates(input_arr)
print(result)  # Output: [3, 1, 2, 5]
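Since dictionaries preserve insertion order in Python 3.7 and later, the same order-preserving, hash-based removal can be condensed to a one-liner (an idiomatic alternative, not part of the original solution; the function name is ours):

```python
def remove_duplicates_concise(arr):
    # dict.fromkeys keeps only the first occurrence of each key,
    # in insertion order (guaranteed since Python 3.7)
    return list(dict.fromkeys(arr))

print(remove_duplicates_concise([3, 1, 2, 2, 1, 3, 5]))  # Output: [3, 1, 2, 5]
```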

FAQs

1. Why do we need to remove duplicates from an array?

Removing duplicates from an array is often necessary to ensure data integrity, improve efficiency in data processing, and simplify data analysis. It’s commonly used in various programming tasks and applications.

2. What are the different methods to remove duplicates from an array?

There are several methods to remove duplicates from an array, including using hash maps, sorting and iterating, using sets or linked lists, or employing in-place techniques.

3. What is the time complexity of removing duplicates from an array using hash maps?

The time complexity for removing duplicates using hash maps is typically O(n), where n is the number of elements in the array. This is because hash maps allow constant-time (O(1)) average complexity for insertion and retrieval.

4. How can duplicates be removed from an array without using extra space?

Duplicates can be removed from an array without using extra space by employing in-place techniques. One common approach is to sort the array first and then iterate through it to remove duplicates in a single pass.

5. Is the order of elements preserved when removing duplicates from an array?

The order of elements can be preserved or not, depending on the specific method used to remove duplicates. Hash map-based methods typically preserve the order, while sorting-based methods may not.
