Data Structures and Algorithms
Data structures and algorithms form the backbone of computer science and are crucial for technical interviews. Let's dive into the key topics and practice problems.
Key Topics
Arrays and Strings: Basic operations, common problems, and optimization techniques.
Linked Lists: Singly linked lists, doubly linked lists, and common algorithms.
Stacks and Queues: Implementation and usage in various problems.
Trees and Graphs: Binary trees, binary search trees, AVL trees, graph traversal algorithms (DFS, BFS).
Hashing: Hash tables, hash functions, collision handling.
Sorting and Searching: Common algorithms like QuickSort, MergeSort, Binary Search.
Dynamic Programming: Concepts and classic problems.
Recursion: Understanding and solving recursive problems.
Bit Manipulation: Techniques and common problems.
Complexity Analysis: Big O notation, time and space complexity.
Algorithm and Flowchart Overview
An algorithm is a step-by-step procedure or formula for solving a problem, while a flowchart is a graphical representation of an algorithm. They are critical tools in programming and problem-solving, helping to visualize and plan out solutions effectively.
Pseudocode
Pseudocode describes an algorithm in plain language with simple, high-level commands, free of the strict syntax rules of any particular programming language. It is a useful tool for planning and communicating algorithms before writing actual code: the goal is to make the logic clear and understandable to anyone, even readers unfamiliar with programming.
Characteristics of Pseudocode:
Simplicity: Written in plain language.
Readability: Easy to read and understand by humans.
Language-agnostic: Not specific to any programming language.
Structured: Often resembles structured programming constructs (loops, conditionals).
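For instance, the characteristics above show up in a short pseudocode sketch of a linear search (purely illustrative):

```
ALGORITHM LinearSearch(list, target)
    FOR each element IN list
        IF element = target THEN
            RETURN true
        END IF
    END FOR
    RETURN false
END ALGORITHM
```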
Time Complexity
Time complexity describes the amount of time an algorithm takes to run as a function of the input size, and is used to estimate an algorithm's efficiency. The most common time complexities are:
- O(1): Constant time – execution time does not depend on the input size.
- O(log n): Logarithmic time – efficiency grows with the logarithm of input size (e.g., binary search).
- O(n): Linear time – time grows directly proportional to the input size.
- O(n log n): Log-linear time – common in efficient sorting algorithms like merge sort and quicksort.
- O(n²): Quadratic time – often seen in algorithms with nested loops (e.g., bubble sort).
- O(2ⁿ): Exponential time – grows rapidly, seen in brute-force solutions (e.g., subset problems).
- O(n!): Factorial time – extremely inefficient, like solving the traveling salesman problem using brute force.
Case Analysis
- Best Case: the minimum number of steps the algorithm performs on any input.
- Worst Case: the maximum number of steps required on any input.
- Average Case: the expected number of steps over all inputs.
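As a concrete illustration of these growth rates, binary search runs in O(log n) on a sorted list: in the best case the middle element matches immediately, while in the worst case the search range is halved until it is empty. A minimal sketch:

```python
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2      # halve the search range each step
        if arr[mid] == target:
            return mid               # best case: found on the first probe
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1                        # worst case: range shrinks to empty

print(binary_search([1, 3, 5, 7, 9], 7))  # Output: 3
```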
1. Arrays
Definition: A collection of elements identified by index or key. They are stored in contiguous memory locations.
Key Operations: Access, update, insert, delete.
Time Complexity: Access O(1), Insert/Delete O(n).
Use Cases: Storing multiple items of the same type, such as a list of scores, names, or any other collection of data.
Example:
arr = [1, 2, 3, 4, 5]
print(arr[0]) # Access first element
arr[1] = 20 # Update second element
arr.append(6) # Insert at the end
arr.pop() # Remove last element
2. Linked Lists
Definition: A linear collection of data elements where the linear order is not given by their physical placement in memory. Instead, each element points to the next.
Types: Singly Linked List (each node points to the next node), Doubly Linked List (each node points to the next and previous nodes).
Key Operations: Insertion, deletion, traversal.
Time Complexity: Access O(n), Insert/Delete O(1).
Use Cases: Dynamic memory allocation, ease of insertion/deletion.
Example (Singly Linked List):
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def append(self, data):
        new_node = Node(data)
        if not self.head:
            self.head = new_node
        else:
            last = self.head
            while last.next:
                last = last.next
            last.next = new_node

    def print_list(self):
        current = self.head
        while current:
            print(current.data, end=" -> ")
            current = current.next

# Usage
ll = LinkedList()
ll.append(1)
ll.append(2)
ll.print_list()  # Output: 1 -> 2 ->
3. Stacks
Definition: A linear data structure that follows the Last In, First Out (LIFO) principle.
Key Operations: Push (insert an element), Pop (remove an element), Peek (retrieve the top element).
Time Complexity: Access O(n), Insert/Delete O(1).
Use Cases: Function call management in recursion, expression evaluation and syntax parsing.
Example:
stack = []
stack.append(1) # Push
stack.append(2)
print(stack.pop()) # Output: 2 (Pop)
print(stack[-1]) # Output: 1 (Peek)
4. Queues
Definition: A linear data structure that follows the First In, First Out (FIFO) principle.
Types: Simple queue, circular queue, priority queue.
Key Operations: Enqueue (insert an element), Dequeue (remove an element), Front (retrieve the first element).
Time Complexity: Access O(n), Insert/Delete O(1).
Use Cases: Order of service in systems (like printers, CPU scheduling), breadth-first search (BFS) in trees/graphs.
Example:
from collections import deque
queue = deque()
queue.append(1) # Enqueue
queue.append(2)
print(queue.popleft()) # Output: 1 (Dequeue)
print(queue[0]) # Output: 2 (Front)
5. Trees
Definition: A hierarchical data structure consisting of nodes, with a root node and child nodes, forming a parent-child relationship.
Types: Binary Tree, Binary Search Tree, AVL Tree, etc.
Key Operations: Insertion, deletion, traversal (in-order, pre-order, post-order).
Time Complexity: Access/Insert/Delete O(log n) for balanced trees.
Use Cases: Hierarchical data storage (file systems, XML data), searching and sorting algorithms.
Example (Binary Tree):
class TreeNode:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def inorder_traversal(root):
    if root:
        inorder_traversal(root.left)
        print(root.data, end=" ")
        inorder_traversal(root.right)

# Usage
root = TreeNode(1)
root.left = TreeNode(2)
root.right = TreeNode(3)
inorder_traversal(root)  # Output: 2 1 3
6. Graphs
Definition: A collection of nodes (vertices) and edges connecting pairs of nodes. Can be directed or undirected.
Key Operations: Add vertex, add edge, traversal (DFS, BFS), shortest path algorithms (Dijkstra, Floyd-Warshall).
Time Complexity: Varies based on operations and implementation.
Use Cases: Networking, social networks, pathfinding in maps.
Example:
from collections import defaultdict, deque

class Graph:
    def __init__(self):
        self.graph = defaultdict(list)

    def add_edge(self, u, v):
        self.graph[u].append(v)

    def bfs(self, start):
        visited = {start}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            print(node, end=" ")
            for neighbor in self.graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)

# Usage
g = Graph()
g.add_edge(0, 1)
g.add_edge(0, 2)
g.add_edge(1, 2)
g.add_edge(2, 0)
g.add_edge(2, 3)
g.add_edge(3, 3)
g.bfs(2)  # Output: 2 0 3 1
7. Hashing
Definition: The process of mapping data to a fixed-size integer using hash functions, used to implement hash tables (dictionaries in Python).
Key Operations: Insertion, deletion, search.
Time Complexity: O(1) on average for insert, delete, search.
Use Cases: Implementing associative arrays, database indexing.
Example:
hash_table = {}
hash_table['key1'] = 'value1'
hash_table['key2'] = 'value2'
print(hash_table['key1']) # Output: value1
print(hash_table.get('key2')) # Output: value2
8. Heaps
Definition: A specialized tree-based data structure that satisfies the heap property. Max Heap: parent nodes are greater than or equal to their children. Min Heap: parent nodes are less than or equal to their children.
Key Operations: Insert, delete (extract max/min).
Time Complexity: O(log n) for insert, delete.
Use Cases: Priority queue implementation, order statistics.
Example (Min Heap using heapq):
import heapq
heap = []
heapq.heappush(heap, 10)
heapq.heappush(heap, 1)
heapq.heappush(heap, 5)
print(heapq.heappop(heap)) # Output: 1 (smallest element)
print(heap[0]) # Output: 5 (next smallest element)
9. Tries
Definition: A tree-like data structure used to store a dynamic set of strings where the keys are usually strings.
Key Operations: Insert, search, delete.
Time Complexity: O(n) for insert, search, where n is the length of the key.
Use Cases: Auto-complete systems, spell-checking, IP routing.
Example:
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end_of_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        current = self.root
        for letter in word:
            if letter not in current.children:
                current.children[letter] = TrieNode()
            current = current.children[letter]
        current.is_end_of_word = True

    def search(self, word):
        current = self.root
        for letter in word:
            if letter not in current.children:
                return False
            current = current.children[letter]
        return current.is_end_of_word

# Usage
trie = Trie()
trie.insert("hello")
print(trie.search("hello"))  # Output: True
print(trie.search("hell"))   # Output: False
These are the fundamental data structures that are essential for any programmer to master. Each has its own strengths, weaknesses, and use cases.
Here’s a brief overview of some popular algorithms associated with key data structures:
1. Arrays
Sorting Algorithms:
QuickSort:
Description: Divides the array into smaller sub-arrays based on a pivot element and sorts them recursively.
Time Complexity: Average case O(n log n), worst case O(n²).
MergeSort:
Description: Divides the array into halves, sorts them, and then merges the sorted halves.
Time Complexity: O(n log n) in all cases.
BubbleSort:
Description: Repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order.
Time Complexity: O(n²).
HeapSort:
Description: Builds a heap from the array and then repeatedly extracts the maximum element.
Time Complexity: O(n log n).
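HeapSort as described above builds a heap and repeatedly extracts the extreme element. A minimal sketch using Python's built-in heapq module (a min-heap, so elements come out in ascending order rather than by repeated max-extraction):

```python
import heapq

def heap_sort(arr):
    heap = arr[:]          # copy so the input is not mutated
    heapq.heapify(heap)    # build a min-heap in O(n)
    # repeatedly extract the minimum: O(log n) per pop
    return [heapq.heappop(heap) for _ in range(len(arr))]

print(heap_sort([5, 3, 8, 1]))  # Output: [1, 3, 5, 8]
```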
2. Linked Lists
Traversal and Manipulation:
Reversing a Linked List:
Description: Reverses the direction of a singly linked list.
Time Complexity: O(n).
Detecting a Cycle (Floyd’s Tortoise and Hare):
Description: Uses two pointers moving at different speeds to detect a cycle in a linked list.
Time Complexity: O(n).
Merging Two Sorted Lists:
Description: Combines two sorted linked lists into a single sorted list.
Time Complexity: O(n).
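Floyd's tortoise-and-hare technique from the list above can be sketched as follows (ListNode is a minimal illustrative node class):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def has_cycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next           # moves one step
        fast = fast.next.next      # moves two steps
        if slow is fast:           # pointers meet only if a cycle exists
            return True
    return False

# Example usage
a, b, c = ListNode(1), ListNode(2), ListNode(3)
a.next, b.next, c.next = b, c, a   # 1 -> 2 -> 3 -> back to 1
print(has_cycle(a))                # Output: True
```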
3. Stacks
Stack Utilization:
Balancing Parentheses:
Description: Uses a stack to check for balanced parentheses in an expression.
Time Complexity: O(n).
Infix to Postfix Conversion:
Description: Uses a stack to convert infix expressions to postfix notation.
Time Complexity: O(n).
Evaluate Postfix Expression:
Description: Uses a stack to evaluate expressions written in postfix notation.
Time Complexity: O(n).
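Postfix evaluation from the list above can be sketched with a stack: operands are pushed, and each operator pops its two arguments. This sketch assumes space-separated tokens and only the four basic operators:

```python
def evaluate_postfix(expression):
    stack = []
    for token in expression.split():
        if token in ("+", "-", "*", "/"):
            b = stack.pop()        # right operand is popped first
            a = stack.pop()
            if token == "+":
                stack.append(a + b)
            elif token == "-":
                stack.append(a - b)
            elif token == "*":
                stack.append(a * b)
            else:
                stack.append(a / b)
        else:
            stack.append(float(token))
    return stack.pop()

print(evaluate_postfix("2 3 + 4 *"))  # Output: 20.0
```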
4. Queues
Queue Applications:
Breadth-First Search (BFS) in Graphs:
Description: Uses a queue to explore all neighbors of a node before moving to the next level.
Time Complexity: O(V + E), where V is vertices and E is edges.
Implementing a Cache (LRU Cache):
Description: Combines a hash map with an ordered structure (commonly a doubly linked list or an ordered dictionary) to implement the Least Recently Used (LRU) eviction policy.
Time Complexity: O(1) for get and put operations.
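An LRU cache along these lines can be sketched with Python's OrderedDict, which plays the role of the hash map plus recency-ordered structure (the class and method names here are illustrative):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # keeps keys in recency order

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put(1, "a")
cache.put(2, "b")
print(cache.get(1))   # Output: a
cache.put(3, "c")     # evicts key 2, the least recently used
print(cache.get(2))   # Output: -1
```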
5. Trees
Tree Traversal:
In-order Traversal:
Description: Visits nodes in the left subtree, the root node, and then the right subtree.
Time Complexity: O(n).
Pre-order Traversal:
Description: Visits the root node first, then the left subtree, and finally the right subtree.
Time Complexity: O(n).
Post-order Traversal:
Description: Visits the left subtree, the right subtree, and then the root node.
Time Complexity: O(n).
Tree Manipulation:
Insertion in a Binary Search Tree (BST):
Description: Inserts a new node while maintaining the BST property.
Time Complexity: O(log n) on average.
Deletion in a BST:
Description: Removes a node and restructures the tree to maintain the BST property.
Time Complexity: O(log n) on average.
6. Graphs
Graph Algorithms:
Depth-First Search (DFS):
Description: Explores as far as possible along each branch before backtracking.
Time Complexity: O(V + E).
Dijkstra’s Algorithm:
Description: Finds the shortest path from a single source vertex to all other vertices.
Time Complexity: O(V²) with a simple implementation, O((V + E) log V) with a priority queue.
Kruskal’s Algorithm:
Description: Finds the minimum spanning tree of a graph.
Time Complexity: O(E log E).
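Dijkstra's algorithm with a priority queue can be sketched using heapq; the graph here is a hypothetical adjacency map of (neighbor, weight) pairs:

```python
import heapq

def dijkstra(graph, source):
    dist = {source: 0}
    pq = [(0, source)]                    # (distance, vertex) pairs
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry: a shorter path was found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)]}
print(dijkstra(graph, "A"))  # Output: {'A': 0, 'B': 1, 'C': 3}
```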
7. Hashing
Hash Table Operations:
Direct Addressing:
Description: Uses an array to store key-value pairs directly based on the key.
Time Complexity: O(1).
Hash Functions:
Description: Maps keys to indices in an array for efficient access.
Time Complexity: O(1).
Collision Handling (Chaining and Open Addressing):
Description: Manages key collisions using linked lists (chaining) or probing (open addressing).
Time Complexity: O(1) on average.
8. Heaps
Heap Operations:
Heapify:
Description: Converts an array into a heap.
Time Complexity: O(n).
Extract Max/Min:
Description: Removes and returns the maximum (or minimum) element in the heap, then re-heapifies.
Time Complexity: O(log n).
Insert into Heap:
Description: Adds a new element and maintains the heap property.
Time Complexity: O(log n).
9. Tries
Trie Operations:
Insert:
Description: Inserts a word into the trie.
Time Complexity: O(n) where n is the length of the word.
Search:
Description: Checks if a word exists in the trie.
Time Complexity: O(n).
Prefix Search:
Description: Checks if there exists any word in the trie that starts with the given prefix.
Time Complexity: O(n).
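Prefix search walks the trie exactly like search but ignores the end-of-word flag. A minimal sketch (the starts_with method name is chosen for illustration):

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end_of_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        current = self.root
        for letter in word:
            current = current.children.setdefault(letter, TrieNode())
        current.is_end_of_word = True

    def starts_with(self, prefix):
        current = self.root
        for letter in prefix:
            if letter not in current.children:
                return False
            current = current.children[letter]
        return True                # the whole prefix was matched

trie = Trie()
trie.insert("hello")
print(trie.starts_with("hel"))  # Output: True
print(trie.starts_with("hex"))  # Output: False
```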
Arrays
1. QuickSort
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)

# Example usage
arr = [3, 6, 8, 10, 1, 2, 1]
print(quicksort(arr))  # Output: [1, 1, 2, 3, 6, 8, 10]
2. MergeSort
def merge_sort(arr):
    if len(arr) > 1:
        mid = len(arr) // 2
        left_half = arr[:mid]
        right_half = arr[mid:]
        merge_sort(left_half)
        merge_sort(right_half)
        i = j = k = 0
        while i < len(left_half) and j < len(right_half):
            if left_half[i] < right_half[j]:
                arr[k] = left_half[i]
                i += 1
            else:
                arr[k] = right_half[j]
                j += 1
            k += 1
        while i < len(left_half):
            arr[k] = left_half[i]
            i += 1
            k += 1
        while j < len(right_half):
            arr[k] = right_half[j]
            j += 1
            k += 1

# Example usage
arr = [3, 6, 8, 10, 1, 2, 1]
merge_sort(arr)
print(arr)  # Output: [1, 1, 2, 3, 6, 8, 10]
3. BubbleSort
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

# Example usage
arr = [64, 34, 25, 12, 22, 11, 90]
bubble_sort(arr)
print(arr)  # Output: [11, 12, 22, 25, 34, 64, 90]
Linked Lists
1. Reversing a Linked List
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def reverse_list(head):
    prev = None
    current = head
    while current:
        next_node = current.next
        current.next = prev
        prev = current
        current = next_node
    return prev

# Example usage
head = ListNode(1, ListNode(2, ListNode(3, ListNode(4))))
reversed_head = reverse_list(head)
# Now reversed_head points to the reversed list: 4 -> 3 -> 2 -> 1
Stacks
1. Balancing Parentheses
def is_balanced(expression):
    stack = []
    for char in expression:
        if char in "([{":
            stack.append(char)
        elif char in ")]}":
            if not stack:
                return False
            top = stack.pop()
            if (char == ")" and top != "(") or (char == "]" and top != "[") or (char == "}" and top != "{"):
                return False
    return not stack

# Example usage
expression = "{[()]}"
print(is_balanced(expression))  # Output: True
Queues
1. Breadth-First Search (BFS)
from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    visited.add(start)
    while queue:
        node = queue.popleft()
        print(node, end=" ")
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

# Example usage
graph = {0: [1, 2], 1: [2], 2: [0, 3], 3: [3]}
bfs(graph, 2)  # Output: 2 0 3 1
Trees
1. In-order Traversal
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def inorder_traversal(root):
    if root:
        inorder_traversal(root.left)
        print(root.val, end=" ")
        inorder_traversal(root.right)

# Example usage
root = TreeNode(1, None, TreeNode(2, TreeNode(3)))
inorder_traversal(root)  # Output: 1 3 2
Graphs
1. Depth-First Search (DFS)
def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    print(start, end=" ")
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)

# Example usage
graph = {0: [1, 2], 1: [2], 2: [0, 3], 3: [3]}
dfs(graph, 2)  # Output: 2 0 1 3
Hashing
1. Collision Handling with Chaining
class HashTable:
    def __init__(self, size):
        self.size = size
        self.table = [[] for _ in range(size)]

    def hash_function(self, key):
        return key % self.size

    def insert(self, key, value):
        index = self.hash_function(key)
        for pair in self.table[index]:
            if pair[0] == key:
                pair[1] = value
                return
        self.table[index].append([key, value])

    def get(self, key):
        index = self.hash_function(key)
        for pair in self.table[index]:
            if pair[0] == key:
                return pair[1]
        return None

# Example usage
hash_table = HashTable(10)
hash_table.insert(1, 'one')
hash_table.insert(2, 'two')
print(hash_table.get(1))  # Output: one
Heaps
1. Min-Heap Insert and Extract Min
import heapq
heap = []
heapq.heappush(heap, 10)
heapq.heappush(heap, 1)
heapq.heappush(heap, 5)
print(heapq.heappop(heap)) # Output: 1 (smallest element)
print(heap[0]) # Output: 5 (next smallest element)
Tries
1. Insert and Search
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end_of_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        current = self.root
        for letter in word:
            if letter not in current.children:
                current.children[letter] = TrieNode()
            current = current.children[letter]
        current.is_end_of_word = True

    def search(self, word):
        current = self.root
        for letter in word:
            if letter not in current.children:
                return False
            current = current.children[letter]
        return current.is_end_of_word

# Example usage
trie = Trie()
trie.insert("hello")
print(trie.search("hello"))  # Output: True
print(trie.search("hell"))   # Output: False