Comprehensive answers to 200+ questions for 5-10 LPA companies. Master the fundamentals that get you hired.
Fundamental object-oriented programming concepts with examples.
Answer: A class is a blueprint or template for creating objects. An object is an instance of a class.
class Car {
String model;
int year;
}
Car myCar = new Car(); // myCar is an object
Answer: A constructor is a special method called when an object is created. It initializes the object. If you don't write one, Java provides a default no-argument constructor.
Answer: No, constructors have no return type, not even void. The new operator yields the newly created instance.
Answer: 'this' refers to the current object instance. Used to distinguish between instance variables and parameters.
class Person {
String name;
Person(String name) {
this.name = name; // 'this.name' is instance variable
}
}
Answer: Inheritance allows a class to inherit properties and methods from another class.
class Animal {
void eat() { System.out.println("Eating"); }
}
class Dog extends Animal {
void bark() { System.out.println("Barking"); }
}
Answer: Yes, through method overloading (different parameters) or overriding (in subclasses).
Answer: A destructor cleans up resources when an object is destroyed. Java has automatic garbage collection, so you don't write destructors. The finalize() method once served this role but is deprecated; prefer try-with-resources or java.lang.ref.Cleaner for cleanup.
Answer: Static means it belongs to the class, not instances. main() is static so JVM can call it without creating an object.
Understanding static vs instance members.
"Scope and accessibility of class members."
class Example:
static_var = "static"
def __init__(self):
self.instance_var = "instance"
@staticmethod
def static_method():
return Example.static_var # ✅ Works
# return self.instance_var # ❌ Error
def instance_method(self):
return self.instance_var # ✅ Works
Static methods are called on the class itself and don't have access to 'this' or instance variables. This is why main() is static - it needs to run before any objects exist.
Create a static counter that tracks how many objects of a class have been created.
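A possible sketch of the counter exercise above in Python (the class name is illustrative): a class-level attribute plays the role of a static variable shared by all instances.

```python
class Counter:
    # Class-level ("static") attribute shared by all instances
    instances_created = 0

    def __init__(self):
        # Increment the shared counter each time an object is built
        Counter.instances_created += 1

a, b, c = Counter(), Counter(), Counter()
print(Counter.instances_created)  # 3
```

Note that writing `self.instances_created += 1` would instead create an instance attribute that shadows the class attribute, which is a common pitfall.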
Immutability and inheritance control.
"Making classes, methods, or variables unchangeable."
# Python equivalent using naming convention
class Example:
CONSTANT = "final" # Convention: uppercase means final
def normal_method(self):
return "can be overridden"
# No direct equivalent to final methods in Python
Final prevents modification or extension. Variables become constants, methods can't be overridden, and classes can't be inherited. This is similar to Python's naming conventions and frozen classes.
Design a class hierarchy where some methods are final and some are not.
Blueprint classes for inheritance.
"Classes that define structure but not complete implementation."
from abc import ABC, abstractmethod
class Animal(ABC): # Abstract class
@abstractmethod
def make_sound(self):
pass # No implementation
def breathe(self):
return "breathing" # Concrete method
class Dog(Animal):
def make_sound(self):
return "woof" # Must implement
# animal = Animal() # ❌ Error: can't instantiate
dog = Dog() # ✅ Works
Abstract classes define 'what' should be done, concrete classes define 'how'. They're essential for creating frameworks and ensuring subclasses implement required functionality.
Create an abstract Shape class with area() method, and concrete Circle and Rectangle classes.
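One way the Shape exercise above could look in Python, using the same abc pattern as the Animal example:

```python
from abc import ABC, abstractmethod
import math

class Shape(ABC):
    @abstractmethod
    def area(self):
        ...  # Subclasses must provide an implementation

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return math.pi * self.radius ** 2

class Rectangle(Shape):
    def __init__(self, width, height):
        self.width, self.height = width, height
    def area(self):
        return self.width * self.height

print(Circle(1).area())        # ~3.14159
print(Rectangle(3, 4).area())  # 12
# Shape() would raise TypeError: can't instantiate abstract class
```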
Contracts for class behavior.
"Pure behavior specification without implementation."
from abc import ABC, abstractmethod
class Flyable(ABC): # Interface equivalent
@abstractmethod
def fly(self):
pass
class Swimmable(ABC):
@abstractmethod
def swim(self):
pass
class Duck(Flyable, Swimmable): # Multiple inheritance
def fly(self):
return "flying"
def swim(self):
return "swimming"
Interfaces are like contracts - they specify what a class must do, but not how. Unlike abstract classes, interfaces support multiple inheritance and contain no implementation.
Design interfaces for Vehicle (start, stop) and MusicPlayer (play, pause) and implement in a CarWithStereo class.
Interface capabilities and limitations.
"What can and cannot be in an interface."
# Python protocols/interfaces don't have variables
from typing import Protocol
class Drawable(Protocol):
def draw(self) -> None: ...
# No variables in protocol itself
# Implementing classes define their own variables
Modern interfaces can have default implementations and constants, but the core purpose remains behavior specification. This allows interface evolution without breaking existing implementations.
Create an interface with a default method and see how implementing classes can override or inherit it.
Hands-on OOP concepts with code examples.
Basic class design with encapsulation.
"Creating a simple banking system class."
class BankAccount:
def __init__(self, initial_balance=0):
self.__balance = initial_balance
def deposit(self, amount):
if amount > 0:
self.__balance += amount
def withdraw(self, amount):
if 0 < amount <= self.__balance:
self.__balance -= amount
return True
return False
def get_balance(self):
return self.__balance
This demonstrates encapsulation by keeping balance private and providing controlled access through methods.
Add interest calculation method.
Inheritance control mechanisms.
"Making classes final or sealed."
# Python doesn't have direct equivalent
# Convention: use _ prefix for "private" but still inheritable
class FinalClass:
def __init__(self):
self._private = "can't inherit easily"
def __init_subclass__(cls, **kwargs):
raise TypeError("Cannot inherit from FinalClass")
# class Child(FinalClass): # ❌ Will raise TypeError
# pass
Final classes prevent inheritance for security or design reasons. Private constructors prevent direct instantiation but allow controlled creation through factory methods.
Implement a Singleton class that cannot be inherited.
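A possible sketch of the Singleton exercise above, combining __new__ (single shared instance) with the same __init_subclass__ trick shown earlier (blocking inheritance):

```python
class Singleton:
    _instance = None

    def __new__(cls):
        # Reuse the single instance if it already exists
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init_subclass__(cls, **kwargs):
        # Runs when anyone tries to subclass -> block inheritance
        raise TypeError("Singleton cannot be subclassed")

a = Singleton()
b = Singleton()
print(a is b)  # True: both names refer to the same object
```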
Parameter passing mechanisms.
"How arguments are passed to functions/methods."
def pass_by_value(x):
x = x + 1 # Changes local copy
print(f"Inside function: {x}")
def pass_by_reference(lst):
lst.append(4) # Changes original list
print(f"Inside function: {lst}")
num = 5
pass_by_value(num)
print(f"Outside function: {num}") # Still 5
my_list = [1, 2, 3]
pass_by_reference(my_list)
print(f"Outside function: {my_list}") # [1, 2, 3, 4]
Understanding parameter passing is crucial for debugging. Python always passes object references by value: rebinding a parameter inside a function doesn't affect the caller, but mutating a mutable argument does. C++ gives explicit control with & for references.
Write a swap function that works correctly in both languages.
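For the swap exercise above, a Python sketch: since a function cannot rebind the caller's variables, swapping through a mutable container is one way to make the change visible outside.

```python
def swap(pair):
    # Mutates the list in place, so the caller observes the swap
    pair[0], pair[1] = pair[1], pair[0]

values = [1, 2]
swap(values)
print(values)  # [2, 1]
```

In C++ the same effect is achieved directly with reference parameters (void swap(int& a, int& b)).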
Memory management issues.
"When memory is not properly freed."
# Python has garbage collection, but circular references can cause leaks
class Node:
def __init__(self, value):
self.value = value
self.next = None
def create_circular_reference():
node1 = Node(1)
node2 = Node(2)
node1.next = node2
node2.next = node1 # Circular reference
# Without breaking the cycle, these objects won't be garbage collected
# even after function ends
return node1
# Better approach: use weak references for circular structures
import weakref
class BetterNode:
def __init__(self, value):
self.value = value
self.next = None # Use weakref.ref() if needed
Memory leaks cause programs to consume increasing amounts of memory. In languages with manual memory management like C++, forgetting to delete/free is common. Python's cycle-detecting garbage collector reclaims most circular references, but cycles delay collection and can still leak when objects hold external resources.
Write a program that demonstrates a memory leak and then fix it.
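A runnable sketch of the exercise above, reusing the Node class idea: a reference cycle keeps objects alive past their last external reference, and an explicit gc.collect() (or breaking the cycle) reclaims them. The weakref lets us observe when the objects are actually freed.

```python
import gc
import weakref

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

a, b = Node(1), Node(2)
a.next, b.next = b, a          # create the cycle a <-> b
watcher = weakref.ref(a)       # observe a without keeping it alive

del a, b                       # cycle still holds both nodes
gc.collect()                   # cycle collector breaks and frees the cycle
print(watcher())               # None: the nodes were reclaimed
```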
Reference vs content comparison.
"How to properly compare objects in Java."
# Python uses '==' for content comparison (like Java's equals())
# Python's 'is' checks identity (like Java's '==')
class Person:
def __init__(self, name):
self.name = name
def __eq__(self, other): # Override == behavior
if isinstance(other, Person):
return self.name == other.name
return False
person1 = Person("John")
person2 = Person("John")
person3 = person1
print(person1 == person2) # True (content comparison)
print(person1 is person2) # False (different objects)
print(person1 is person3) # True (same object)
In Java, '==' checks if two references point to the same memory location, while '.equals()' (or an overridden operator) checks if the contents are the same. Always override equals() for custom classes when you want content-based comparison.
Create a custom class and override equals() to compare based on multiple fields.
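A sketch of the multi-field exercise above (the Point class is illustrative). Note the Java equals/hashCode contract has the same counterpart in Python: equal objects should hash equally.

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        # Compare on both fields; NotImplemented lets Python
        # fall back to the other operand's __eq__
        if not isinstance(other, Point):
            return NotImplemented
        return (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # Equal objects must produce equal hashes
        return hash((self.x, self.y))

print(Point(1, 2) == Point(1, 2))  # True (content comparison)
print(Point(1, 2) == Point(1, 3))  # False
```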
36. What is the difference between stack and heap memory? Stack for local variables, heap for objects.
37. What is garbage collection? Automatic memory management in Java.
38. What is exception handling? try-catch-finally blocks.
39. What is the difference between throw and throws? throw raises an exception; throws declares that a method may throw one.
40. What is an ArrayList? Dynamic array implementation.
41. What is a LinkedList? Doubly linked list, good for insertions/deletions.
42. What is a HashSet? Set implementation using hash table, no duplicates.
43. What is the difference between Iterator and ListIterator? ListIterator can traverse backwards.
44. What is a Vector in Java? A legacy, thread-safe counterpart of ArrayList.
45. What is the difference between Comparable and Comparator? Comparable for natural ordering, Comparator for custom.
46. What is serialization in Java? Converting object to byte stream.
47. What is the transient keyword? Excludes field from serialization.
48. What is the volatile keyword? Ensures visibility of changes across threads.
49. What is the synchronized keyword? Makes method/block thread-safe.
50. What is a deadlock? Two threads waiting for each other's resources.
Database query fundamentals and practical examples.
Answer: SQL (Structured Query Language) is used to manage relational databases. RDBMS stands for Relational Database Management System.
Answer: Primary Key uniquely identifies each record. No, it cannot be NULL.
Basic table creation.
"Creating a fundamental employee table."
-- SQL Query
CREATE TABLE Employee (
id INT PRIMARY KEY,
name VARCHAR(100),
salary DECIMAL(10,2)
);
This creates a basic table structure. Always specify data types and constraints appropriately.
Add department_id as foreign key.
Basic INSERT operations.
"Adding multiple records to a database table."
-- SQL Query
INSERT INTO Employee (id, name, salary) VALUES
(1, 'John Doe', 50000),
(2, 'Jane Smith', 55000),
(3, 'Bob Johnson', 48000);
INSERT statements add new records to tables. Always specify column names for clarity and to avoid issues if table structure changes. Multiple records can be inserted in one statement for efficiency.
Insert records with some NULL values and see how it affects queries.
Basic UPDATE operations.
"Modifying existing records in a database."
-- SQL Query
UPDATE Employee
SET salary = 60000
WHERE id = 5;
UPDATE modifies existing data. Always use a WHERE clause to avoid updating all records accidentally. Without WHERE, all rows in the table would be updated.
Update salaries for all employees in a specific department with a percentage increase.
Basic DELETE operations.
"Removing records from a database table."
-- SQL Query
DELETE FROM Employee
WHERE salary < 20000;
DELETE permanently removes records. Always use a WHERE clause to specify which records to delete. Without WHERE, all records in the table would be deleted. Consider using SELECT first to verify which records will be affected.
Delete employees who haven't logged in for 2 years (assuming you have a last_login column).
Combining data from multiple tables.
"Retrieving related data from two tables."
-- SQL Query
SELECT e.name, d.department_name
FROM Employee e
INNER JOIN Department d ON e.department_id = d.id;
INNER JOIN returns only records that have matching values in both tables. If an employee has no department or a department has no employees, those records won't appear in the result.
Join three tables: Employee, Department, and Project to show employees working on projects.
Practical JOIN usage.
"Getting employee information with department details."
-- SQL Query using INNER JOIN
SELECT e.id, e.name, e.salary, d.department_name
FROM Employee e
INNER JOIN Department d ON e.department_id = d.id;
This query combines employee personal information with organizational structure. INNER JOIN ensures only employees with valid departments are shown. LEFT JOIN would include employees without departments.
Add a WHERE clause to show only employees from 'Engineering' department.
GROUP BY and aggregate functions.
"Summarizing data by categories."
-- SQL Query
SELECT d.department_name, COUNT(e.id) as employee_count
FROM Department d
LEFT JOIN Employee e ON d.id = e.department_id
GROUP BY d.id, d.department_name;
GROUP BY creates groups of records with the same value. Aggregate functions like COUNT work on each group. LEFT JOIN ensures departments with 0 employees are included.
Find departments with average salary above 60000.
Subqueries and MAX functions.
"Finding nth highest value without LIMIT."
-- Method 1: Using subquery
SELECT MAX(salary) as second_highest
FROM Employee
WHERE salary < (SELECT MAX(salary) FROM Employee);
This finds the second highest by excluding the absolute highest value. The subquery approach is portable across databases that lack LIMIT/OFFSET. For the nth highest, you can generalize the condition accordingly.
Find the third highest salary using the same approach.
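A runnable sketch of the nth-highest generalization, using Python's sqlite3 with made-up sample data: a correlated subquery counts how many distinct salaries sit above each row, so the nth highest has exactly n-1 above it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (id INT, name TEXT, salary INT)")
conn.executemany(
    "INSERT INTO Employee VALUES (?, ?, ?)",
    [(1, "John Doe", 50000), (2, "Jane Smith", 55000),
     (3, "Bob Johnson", 48000), (4, "Alice Brown", 60000)])

n = 3  # third highest
row = conn.execute("""
    SELECT DISTINCT salary FROM Employee e
    WHERE (SELECT COUNT(DISTINCT salary) FROM Employee
           WHERE salary > e.salary) = ?""", (n - 1,)).fetchone()
print(row[0])  # 50000 (below 60000 and 55000)
```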
Web technologies and general programming concepts.
Web communication protocols.
"How browsers and servers communicate."
# Python HTTP request
import requests
# HTTP (insecure)
response = requests.get('http://example.com')
print(f"Status: {response.status_code}")
# HTTPS (secure)
response = requests.get('https://example.com')
print(f"Secure connection: {response.url.startswith('https://')}")
HTTP is the foundation of web communication. HTTPS adds encryption to protect sensitive data like passwords and credit card information. Modern websites should always use HTTPS.
Check if a website supports HTTP/2 using browser developer tools.
HTTP request types.
"Different ways to interact with web servers."
import requests
# GET request - retrieve data
response = requests.get('https://api.example.com/users')
print(f"GET status: {response.status_code}")
print(f"Data: {response.json()}")
# POST request - send data
user_data = {'name': 'John', 'email': 'john@example.com'}
response = requests.post('https://api.example.com/users',
json=user_data)
print(f"POST status: {response.status_code}")
print(f"Created user: {response.json()}")
GET requests are safe and can be cached/bookmarked. POST requests modify server state and should not be repeated. Understanding these is crucial for REST API design.
Implement a simple REST API client that supports CRUD operations.
Algorithm analysis fundamentals.
"Measuring algorithm efficiency."
# O(1) - Constant time
def get_first_element(arr):
return arr[0] if arr else None
# O(n) - Linear time
def find_max(arr):
max_val = arr[0]
for num in arr: # Visits each element once
if num > max_val:
max_val = num
return max_val
# O(n²) - Quadratic time
def bubble_sort(arr):
for i in range(len(arr)):
for j in range(len(arr) - i - 1): # Nested loops
if arr[j] > arr[j + 1]:
arr[j], arr[j + 1] = arr[j + 1], arr[j]
return arr
Time complexity helps predict how an algorithm will perform with large inputs. O(1) is ideal, O(n²) algorithms become unusable for large datasets. Understanding this guides algorithm selection.
Analyze the time complexity of finding duplicates in an array using different approaches.
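For the duplicate-finding exercise above, one possible comparison of two approaches: the nested-loop version is O(n²), while a set-based version trades O(n) extra space for O(n) time.

```python
def has_duplicates_quadratic(arr):
    # O(n^2): compare every pair of elements
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] == arr[j]:
                return True
    return False

def has_duplicates_linear(arr):
    # O(n) time, O(n) space: remember what we've seen
    seen = set()
    for x in arr:
        if x in seen:
            return True
        seen.add(x)
    return False

print(has_duplicates_quadratic([1, 2, 3, 2]))  # True
print(has_duplicates_linear([1, 2, 3]))        # False
```

Sorting first gives a third option: O(n log n) time with O(1) extra space (duplicates become adjacent).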
Specific complexity classes.
"Understanding common time complexities."
# O(1) examples
def get_element_by_index(arr, index):
return arr[index] # Same time regardless of array size
def check_if_empty(arr):
return len(arr) == 0 # Constant time operation
# O(n) examples
def find_element_linear(arr, target):
for item in arr: # Time grows with array size
if item == target:
return True
return False
def sum_array(arr):
total = 0
for num in arr: # Must visit each element
total += num
return total
O(1) operations are the fastest - their performance doesn't degrade with size. O(n) operations are acceptable for most applications but can be slow for very large datasets.
Identify O(1) vs O(n) operations in common programming tasks.
Practical programming problems with solutions.
Basic conditional logic.
"Simple number classification."
def check_even_odd(n):
if n % 2 == 0:
print(f"{n} is even")
else:
print(f"{n} is odd")
# Example usage
check_even_odd(5) # Output: 5 is odd
Uses modulo operator to check divisibility by 2. This is a fundamental conditional statement.
Handle negative numbers.
Multiple condition comparison.
"Finding maximum among three values."
def find_largest(a, b, c):
if a >= b and a >= c:
return a
elif b >= a and b >= c:
return b
else:
return c
# Example usage
print(find_largest(10, 25, 15)) # Output: 25
Uses logical AND operators to compare all combinations. This approach scales to more numbers.
Find largest of n numbers using array.
String manipulation basics.
"Reversing character order in a string."
def reverse_string(s):
return s[::-1]
# Example usage
print(reverse_string("hello")) # Output: olleh
Python uses slicing, C++ uses std::reverse algorithm. Both are efficient for string reversal.
Reverse words in a sentence.
String reversal and comparison.
"Checking if a string reads the same forwards and backwards."
def is_palindrome(s):
# Remove spaces and convert to lowercase for case-insensitive check
cleaned = ''.join(c.lower() for c in s if c.isalnum())
return cleaned == cleaned[::-1]
# Example usage
print(is_palindrome("A man, a plan, a canal: Panama")) # True
print(is_palindrome("race a car")) # False
A palindrome reads the same forwards and backwards. The key is cleaning the string (removing punctuation, spaces, and handling case) before comparison. Python's slicing makes this elegant.
Check if a number is a palindrome (without converting to string).
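A sketch of the numeric-palindrome exercise above: instead of converting to a string, build the reversed number digit by digit using modulo and integer division, then compare.

```python
def is_number_palindrome(n):
    # Negative numbers can't be palindromes (the '-' has no mirror)
    if n < 0:
        return False
    original, reversed_num = n, 0
    while n > 0:
        reversed_num = reversed_num * 10 + n % 10  # append the last digit
        n //= 10                                   # drop the last digit
    return original == reversed_num

print(is_number_palindrome(12321))  # True
print(is_number_palindrome(123))    # False
```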
Recursive and iterative solutions.
"Calculating n! (n factorial)."
def factorial_iterative(n):
if n < 0:
raise ValueError("Factorial not defined for negative numbers")
result = 1
for i in range(1, n + 1):
result *= i
return result
def factorial_recursive(n):
if n < 0:
raise ValueError("Factorial not defined for negative numbers")
if n == 0 or n == 1:
return 1
return n * factorial_recursive(n - 1)
# Example usage
print(factorial_iterative(5)) # 120
print(factorial_recursive(5)) # 120
Factorial grows very quickly, so use appropriate data types (long long in C++). Iterative solution is better to avoid stack overflow. Both solutions handle the base cases properly.
Calculate factorial using big integer libraries for large numbers.
Sequence generation with multiple approaches.
"Generating the Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8..."
def fibonacci_iterative(n):
if n <= 0:
return []
elif n == 1:
return [0]
elif n == 2:
return [0, 1]
fib_sequence = [0, 1]
for i in range(2, n):
next_fib = fib_sequence[i-1] + fib_sequence[i-2]
fib_sequence.append(next_fib)
return fib_sequence
def fibonacci_recursive(n):
if n <= 0:
return []
elif n == 1:
return [0]
elif n == 2:
return [0, 1]
else:
fib = fibonacci_recursive(n-1)
fib.append(fib[-1] + fib[-2])
return fib
# Example usage
print(fibonacci_iterative(10)) # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fibonacci_recursive(10)) # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
Fibonacci sequence is fundamental in algorithm analysis. The iterative approach is more efficient and avoids recursion limits. Use long long for larger terms to prevent overflow.
Find the nth Fibonacci number using matrix exponentiation for O(log n) time.
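For the O(log n) exercise above, one possible sketch using the fast-doubling identities (an equivalent of 2x2 matrix exponentiation, derived from powers of [[1,1],[1,0]]): F(2k) = F(k)·(2·F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)².

```python
def fib(n):
    # Fast doubling: computes (F(k), F(k+1)) recursively,
    # halving k each step -> O(log n) arithmetic operations
    def helper(k):
        if k == 0:
            return (0, 1)  # (F(0), F(1))
        a, b = helper(k >> 1)          # (F(m), F(m+1)) for m = k // 2
        c = a * (2 * b - a)            # F(2m)
        d = a * a + b * b              # F(2m+1)
        if k & 1:
            return (d, c + d)          # k odd: (F(2m+1), F(2m+2))
        return (c, d)                  # k even: (F(2m), F(2m+1))
    return helper(n)[0]

print(fib(10))  # 55
print(fib(50))  # 12586269025
```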
String iteration and character checking.
"Counting vowel characters (a, e, i, o, u) in text."
def count_vowels(s):
vowels = 'aeiouAEIOU'
count = 0
for char in s:
if char in vowels:
count += 1
return count
# Example usage
print(count_vowels("Hello World")) # 3 (e, o, o)
print(count_vowels("Programming")) # 3 (o, a, i)
This demonstrates basic string processing. The approach is straightforward: check each character against a set of vowels. Consider whether to count both uppercase and lowercase.
Count consonants instead of vowels, or create a frequency map of all characters.
Prime number detection algorithm.
"Determining if a number has no positive divisors other than 1 and itself."
import math
def is_prime(n):
if n <= 1:
return False
if n <= 3:
return True
if n % 2 == 0 or n % 3 == 0:
return False
# Check divisibility up to √n
for i in range(5, int(math.sqrt(n)) + 1, 6):
if n % i == 0 or n % (i + 2) == 0:
return False
return True
# Example usage
print(is_prime(17)) # True
print(is_prime(18)) # False
print(is_prime(23)) # True
The √n optimization reduces checks significantly. For n=100, we check up to 10 instead of 99. The +6 pattern skips multiples of 2 and 3 for additional efficiency.
Implement the Sieve of Eratosthenes to find all primes up to n.
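A sketch of the Sieve of Eratosthenes exercise above: instead of testing each number individually, mark the multiples of each prime as composite in a single pass, which finds all primes up to n in O(n log log n).

```python
def sieve_of_eratosthenes(n):
    # is_prime[i] stays True only if i has no smaller prime factor
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Start at p*p: smaller multiples were already marked
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]

print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```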
Array traversal basics.
"Finding maximum value in an array."
def find_max(arr):
if not arr: return None
max_val = arr[0]
for num in arr:
if num > max_val:
max_val = num
return max_val
# Example usage
print(find_max([3, 7, 2, 9, 1])) # Output: 9
Simple linear search through array. This is the basic pattern for finding min/max in collections.
Find both min and max in single pass.
Multiple variable tracking.
"Finding runner-up maximum value."
def second_largest(arr):
if len(arr) < 2:
return None
first = second = float('-inf')
for num in arr:
if num > first:
second = first
first = num
elif num > second and num != first:
second = num
return second if second != float('-inf') else None
# Example usage
print(second_largest([3, 7, 2, 9, 1])) # Output: 7
print(second_largest([9, 9, 8])) # Output: 8
Maintains two variables to track the largest and second largest. Handles duplicates correctly by checking num != first.
Find kth largest element using sorting or heap.
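For the kth-largest exercise above, a heap-based sketch using the standard library: keep a min-heap of the k largest elements seen so far, so the heap's root is the kth largest. This runs in O(n log k), better than full sorting's O(n log n) when k is small.

```python
import heapq

def kth_largest(arr, k):
    # Min-heap of the k largest elements; the root is the kth largest
    heap = arr[:k]
    heapq.heapify(heap)
    for num in arr[k:]:
        if num > heap[0]:
            heapq.heapreplace(heap, num)  # pop root, push num
    return heap[0]

print(kth_largest([3, 7, 2, 9, 1], 2))  # 7
```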
In-place array manipulation.
"Reversing elements without extra space."
def reverse_array(arr):
left, right = 0, len(arr) - 1
while left < right:
arr[left], arr[right] = arr[right], arr[left]
left += 1
right -= 1
return arr
# Example usage
arr = [1, 2, 3, 4, 5]
reverse_array(arr)
print(arr) # Output: [5, 4, 3, 2, 1]
Two-pointer technique swaps elements from outside in. This is O(n) time and O(1) extra space, much better than creating a new array.
Reverse only part of the array (between given indices).
Sequential comparison.
"Verifying if array elements are in ascending order."
def is_sorted(arr):
if not arr:
return True # Empty array is considered sorted
for i in range(1, len(arr)):
if arr[i] < arr[i-1]:
return False
return True
# Example usage
print(is_sorted([1, 2, 3, 4, 5])) # True
print(is_sorted([1, 3, 2, 4, 5])) # False
Simple linear scan checking adjacent elements. This assumes ascending order - modify the comparison for descending order.
Check if array is sorted in descending order.
In-place deduplication.
"Removing duplicate elements while preserving order."
def remove_duplicates(arr):
if not arr:
return 0
slow = 0
for fast in range(1, len(arr)):
if arr[fast] != arr[slow]:
slow += 1
arr[slow] = arr[fast]
return slow + 1 # Length of unique elements
# Example usage
arr = [1, 1, 2, 2, 3, 4, 4, 5]
length = remove_duplicates(arr)
print(arr[:length]) # [1, 2, 3, 4, 5]
Two-pointer technique maintains relative order while removing duplicates in-place. This is efficient for sorted arrays and preserves the original order.
Remove duplicates allowing at most two occurrences of each element.
String manipulation and splitting.
"Counting words in a sentence or paragraph."
def count_words(text):
if not text.strip():
return 0
return len(text.split())
# Example usage
print(count_words("Hello world")) # 2
print(count_words("This is a test sentence.")) # 5
print(count_words(" ")) # 0
Python's split() method handles multiple spaces automatically. C++ uses stringstream to extract words, which also handles whitespace correctly.
Count words ignoring punctuation marks.
Character frequency counting.
"Checking if two strings contain the same characters with same frequencies."
from collections import Counter
def are_anagrams(s1, s2):
# Remove spaces and convert to lowercase
s1 = ''.join(s1.split()).lower()
s2 = ''.join(s2.split()).lower()
return Counter(s1) == Counter(s2)
# Example usage
print(are_anagrams("listen", "silent")) # True
print(are_anagrams("hello", "world")) # False
print(are_anagrams("Astronomer", "Moon starer")) # True
Anagrams have the same character frequencies. Counter in Python makes this trivial. In C++, we use a frequency map and check if all counts are zero after incrementing and decrementing.
Find all anagrams of a word from a list of words.
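A sketch of the word-list exercise above, reusing the Counter idea: compare each candidate's letter frequencies against the target word (the sample words are illustrative).

```python
from collections import Counter

def find_anagrams(word, candidates):
    # Two words are anagrams if their letter frequencies match
    target = Counter(word.lower())
    return [c for c in candidates
            if c.lower() != word.lower() and Counter(c.lower()) == target]

print(find_anagrams("listen", ["silent", "enlist", "google", "tinsel"]))
# ['silent', 'enlist', 'tinsel']
```

For large word lists, a faster design is to group words by their sorted letters (''.join(sorted(w))) so lookups become O(1) per query.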
Essential OS concepts for technical interviews.
Resource allocation and synchronization.
"Deadlock occurs when two or more processes are waiting for resources held by each other."
# Deadlock example (conceptual)
import threading
import time
lock1 = threading.Lock()
lock2 = threading.Lock()
def thread1():
lock1.acquire()
print("Thread 1: Got lock 1")
time.sleep(1)
lock2.acquire() # Waits for lock 2
print("Thread 1: Got lock 2")
lock2.release()
lock1.release()
def thread2():
lock2.acquire()
print("Thread 2: Got lock 2")
time.sleep(1)
lock1.acquire() # Waits for lock 1
print("Thread 2: Got lock 1")
lock1.release()
lock2.release()
# This will cause deadlock
t1 = threading.Thread(target=thread1)
t2 = threading.Thread(target=thread2)
t1.start()
t2.start()
Deadlock requires four conditions: mutual exclusion, hold and wait, no preemption, and circular wait. The example shows two threads acquiring locks in different orders, creating a circular dependency.
How would you prevent this deadlock?
Deadlock prevention strategies.
"Coffman conditions that must all be present for deadlock to occur."
# Deadlock prevention strategies
"""
1. Mutual Exclusion - Can't prevent (some resources are inherently non-shareable)
2. Hold and Wait - Require processes to request all resources at once
3. No Preemption - Allow preemption of resources
4. Circular Wait - Impose ordering on resource acquisition
"""
# Example of ordered resource acquisition (prevents circular wait)
import threading
lock1 = threading.Lock()
lock2 = threading.Lock()
def safe_thread():
# Always acquire locks in same order
lock1.acquire()
print("Got lock 1")
lock2.acquire()
print("Got lock 2")
# Critical section
print("Doing work...")
lock2.release()
lock1.release()
To prevent deadlock, you need to break at least one of these four conditions. The most practical approaches are resource ordering and avoiding hold-and-wait.
Implement banker's algorithm for deadlock avoidance.
Memory management concepts.
"Technique that gives processes the illusion of large contiguous memory."
# Virtual memory simulation (conceptual)
class VirtualMemory:
def __init__(self, physical_pages=4):
self.physical_pages = physical_pages
self.page_table = {} # virtual -> physical mapping
self.disk = {} # swapped out pages
def access(self, virtual_addr):
if virtual_addr in self.page_table:
return f"Physical page: {self.page_table[virtual_addr]}"
elif virtual_addr in self.disk:
# Page fault - load from disk
return f"Page fault! Loading from disk..."
else:
return "Segmentation fault!"
vm = VirtualMemory()
print(vm.access(0x1000)) # Segmentation fault (not mapped, not on disk)
vm.page_table[0x1000] = 0 # Map to physical page 0
print(vm.access(0x1000)) # Hit
Virtual memory allows programs to use more memory than physically available. When a page isn't in physical memory, a page fault occurs and the page is loaded from disk.
Explain the difference between paging and segmentation.
Concurrency fundamentals.
"Processes are independent execution units, threads are lightweight subprocesses."
import multiprocessing
import threading
import time
def worker(name):
print(f"{name} starting")
time.sleep(1)
print(f"{name} finishing")
# Process example (separate memory)
if __name__ == "__main__":
p = multiprocessing.Process(target=worker, args=("Process",))
p.start()
p.join()
# Thread example (shared memory)
t = threading.Thread(target=worker, args=("Thread",))
t.start()
t.join()
Processes are isolated and communicate via pipes/sockets. Threads share memory but need synchronization. Use processes for isolation, threads for performance.
When would you use multiprocessing vs multithreading?
Synchronization primitives.
"Variable that controls access to shared resources."
import threading
import time
# Binary semaphore (mutex)
semaphore = threading.Semaphore(1) # Only 1 process at a time
def critical_section(name):
semaphore.acquire()
try:
print(f"{name} in critical section")
time.sleep(1)
print(f"{name} leaving critical section")
finally:
semaphore.release()
# Usage
t1 = threading.Thread(target=critical_section, args=("Thread 1",))
t2 = threading.Thread(target=critical_section, args=("Thread 2",))
t1.start()
t2.start()
t1.join()
t2.join()
Semaphores prevent race conditions by controlling concurrent access. Binary semaphores (0-1) work like mutexes, counting semaphores allow multiple concurrent accesses.
Implement producer-consumer problem using semaphores.
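A possible sketch of the producer-consumer exercise above: empty_slots counts free buffer space, filled_slots counts items ready to consume, and a lock protects the buffer itself (capacity and item count here are arbitrary).

```python
import threading
from collections import deque

buffer = deque()
empty_slots = threading.Semaphore(3)   # free capacity in the buffer
filled_slots = threading.Semaphore(0)  # items available to consume
buffer_lock = threading.Lock()
consumed = []

def producer():
    for item in range(5):
        empty_slots.acquire()          # wait for a free slot
        with buffer_lock:
            buffer.append(item)
        filled_slots.release()         # signal an item is ready

def consumer():
    for _ in range(5):
        filled_slots.acquire()         # wait for an item
        with buffer_lock:
            consumed.append(buffer.popleft())
        empty_slots.release()          # free a slot

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start()
t2.start()
t1.join()
t2.join()
print(consumed)  # [0, 1, 2, 3, 4]
```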
Memory management technique.
"Dividing memory into fixed-size pages for efficient allocation."
# Simple paging simulation
class PageTable:
def __init__(self, page_size=4096):
self.page_size = page_size
self.table = {} # page_number -> frame_number
def translate_address(self, logical_addr):
page_number = logical_addr // self.page_size
offset = logical_addr % self.page_size
if page_number in self.table:
frame_number = self.table[page_number]
physical_addr = frame_number * self.page_size + offset
return f"Physical address: {physical_addr}"
else:
return "Page fault!"
def map_page(self, page_num, frame_num):
self.table[page_num] = frame_num
pt = PageTable()
pt.map_page(0, 5) # Page 0 -> Frame 5
print(pt.translate_address(0)) # Physical address: 20480 (frame 5 * 4096 + offset 0)
print(pt.translate_address(4096)) # Page fault (page 1 not mapped)
print(pt.translate_address(8192)) # Page fault (page 2 not mapped)
Paging eliminates external fragmentation by using fixed-size pages. The page table translation happens in hardware (MMU) for performance.
Explain how TLB (Translation Lookaside Buffer) improves paging performance.
Concurrency bugs.
"When multiple threads access shared data simultaneously, leading to inconsistent results."
import threading
counter = 0
def increment():
global counter
for _ in range(100000):
# Race condition: read-modify-write is not atomic
temp = counter
temp += 1
counter = temp
t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=increment)
t1.start()
t2.start()
t1.join()
t2.join()
print(f"Expected: 200000, Actual: {counter}") # Usually less due to race condition
Race conditions occur when operations that should be atomic are split into multiple steps. Use locks or atomic operations to prevent them.
Fix this race condition using a mutex.
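One way to fix the race condition above: wrap the read-modify-write in a Lock (a mutex), so only one thread at a time can update the counter.

```python
import threading

counter = 0
counter_lock = threading.Lock()

def safe_increment():
    global counter
    for _ in range(100000):
        # The lock makes the read-modify-write sequence atomic
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=safe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000, deterministically
```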
Process scheduling.
"Saving current process state and loading another process's state."
# Conceptual context switch
class Process:
def __init__(self, pid, state="ready"):
self.pid = pid
self.state = state
self.registers = {"PC": 0, "SP": 0} # Program counter, stack pointer
def save_context(self):
print(f"Saving context for process {self.pid}")
# In reality, this saves to PCB (Process Control Block)
def load_context(self):
print(f"Loading context for process {self.pid}")
# In reality, this loads from PCB
def context_switch(from_process, to_process):
print(f"Context switch: {from_process.pid} -> {to_process.pid}")
from_process.save_context()
to_process.load_context()
p1 = Process(1, "running")
p2 = Process(2, "ready")
context_switch(p1, p2)
Context switching is expensive because it involves saving/restoring CPU registers, memory mappings, and cache flushing. This is why threads (which share address space) have faster context switches than processes.
Explain why thread context switches are faster than process context switches.
Scheduling fairness.
"When a process never gets CPU time due to scheduling policy."
# Starvation example with priority scheduling
import threading
import time

high_priority_event = threading.Event()
low_priority_counter = 0

def high_priority_task():
    while True:
        high_priority_event.set()  # Signal completion
        time.sleep(0.01)  # Short burst

def low_priority_task():
    global low_priority_counter
    while low_priority_counter < 10:
        if high_priority_event.wait(timeout=0.1):  # Wait for signal
            low_priority_counter += 1
            print(f"Low priority task: {low_priority_counter}")
            high_priority_event.clear()

# High priority thread runs continuously
# Low priority thread may never get CPU time
t_high = threading.Thread(target=high_priority_task, daemon=True)
t_low = threading.Thread(target=low_priority_task)
t_high.start()
t_low.start()
t_low.join(timeout=5)  # May time out due to starvation

Starvation is solved with aging (gradually increasing the priority of waiting processes) or round-robin scheduling. It's different from deadlock because starved processes are ready to run but never selected.
How does aging prevent starvation in priority scheduling?
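A toy sketch of aging (illustrative, not a real scheduler): each scheduling round, every waiting process's effective priority improves, so even a low-priority process is eventually selected.

```python
def schedule_with_aging(procs, rounds=12, boost=1):
    """procs: {name: base_priority}; lower number = higher priority."""
    effective = dict(procs)  # Effective priorities, improved by waiting
    order = []
    for _ in range(rounds):
        runner = min(effective, key=effective.get)  # Best effective priority runs
        order.append(runner)
        effective[runner] = procs[runner]           # Runner resets to its base priority
        for name in effective:
            if name != runner:
                effective[name] -= boost            # Everyone still waiting ages

    return order

# Without aging, "low" (base priority 5) would never beat the two priority-0 processes
result = schedule_with_aging({"high": 0, "high2": 0, "low": 5})
print(result)
```

Because waiting always improves a process's effective priority, the gap to the high-priority processes shrinks by `boost` every round, bounding the maximum wait.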
Memory performance issue.
"When system spends more time paging than executing."
# Thrashing simulation (conceptual)
class MemoryManager:
    def __init__(self, total_frames=10):
        self.total_frames = total_frames
        self.page_faults = 0

    def access_page(self, page, process_frames):
        if page not in process_frames:
            self.page_faults += 1
            if self.page_faults == self.total_frames * 10:  # Sustained high fault rate
                print(f"THRASHING: High page fault rate! Faults: {self.page_faults}")
                # At this point the system would start swapping whole processes out
        return page in process_frames

# Simulate multiple processes with insufficient memory
mm = MemoryManager()
processes = [{"frames": [1, 2]} for _ in range(8)]  # 8 processes share 10 frames
for i in range(100):
    for p in processes:
        mm.access_page(3, p["frames"])  # Page 3 not in memory - page fault

Thrashing occurs when the combined working sets of all processes exceed physical memory. Solutions include reducing the multiprogramming level or adding RAM.
How does the working set model help prevent thrashing?
Advanced OOP concepts and practical implementations.
Immutability and inheritance control.
"The final keyword prevents modification or extension."
# Python equivalent using naming conventions and properties
class FinalClass:
    def __init__(self):
        self._value = 42

    @property
    def value(self):
        return self._value
    # No setter = final-like behavior

# Usage
obj = FinalClass()
print(obj.value)  # 42
# obj.value = 50  # AttributeError

Final prevents inheritance (classes), overriding (methods), and reassignment (variables). It's used for security, performance, and design clarity.
Create a class with final methods and show inheritance attempt.
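Python 3.8+ also offers `typing.final` as the closest analogue to Java's final; note that it is enforced by static type checkers (mypy, pyright), not by the interpreter. A small sketch:

```python
from typing import final

@final                      # Type checkers reject any subclass of this class
class Config:
    @final                  # ...and any override of this method
    def load(self):
        return {"debug": False}

# The interpreter itself still allows this subclass at runtime,
# but mypy reports: Cannot inherit from final class "Config"
class MyConfig(Config):
    pass

print(Config().load())
```

This gives Java-style intent documentation and tooling support without any runtime cost.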
Class-level methods vs instance methods.
"Static methods belong to the class, not instances."
class MathUtils:
    @staticmethod
    def add(x, y):
        return x + y

    @staticmethod
    def factorial(n):
        if n <= 1:
            return 1
        return n * MathUtils.factorial(n - 1)

# Usage
print(MathUtils.add(5, 3))     # 8
print(MathUtils.factorial(5))  # 120

Static methods are utility functions that don't need object state. They're called on the class itself and are shared across all instances.
Create a static counter that tracks total objects created.
Accessing private members from outside the class.
"Friend functions can access private/protected members."
# Python doesn't have friend functions, but the underscore convention is similar
class MyClass:
    def __init__(self):
        self.__private = 42

    def get_private(self):
        return self.__private

def friend_function(obj):
    # In Python, privacy is a convention; name mangling still allows access
    return obj._MyClass__private

obj = MyClass()
print(friend_function(obj))  # 42

Friend functions break encapsulation for specific functions that need access. Use them sparingly, as they weaken OOP encapsulation guarantees.
Create two classes where one is friend of another.
Hash table implementation and collision handling.
"HashMap uses hashing for O(1) average case operations."
# Python dict implementation concept
class SimpleHashMap:
    def __init__(self, size=10):
        self.size = size
        self.buckets = [[] for _ in range(size)]

    def _hash(self, key):
        return hash(key) % self.size

    def put(self, key, value):
        index = self._hash(key)
        bucket = self.buckets[index]
        # Check if key exists
        for i, (k, v) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        index = self._hash(key)
        bucket = self.buckets[index]
        for k, v in bucket:
            if k == key:
                return v
        return None

# Usage
hm = SimpleHashMap()
hm.put("name", "John")
print(hm.get("name"))  # John

HashMap uses a hash function to map keys to bucket indices. Collisions are handled by storing multiple entries in the same bucket (separate chaining).
Implement collision resolution using linear probing.
Error handling and recovery mechanisms.
"Exceptions handle runtime errors gracefully."
def divide(a, b):
    try:
        result = a / b
        return result
    except ZeroDivisionError as e:
        print(f"Error: {e}")
        return None
    except TypeError as e:
        print(f"Type error: {e}")
        return None
    finally:
        print("Division operation completed")

# Usage
print(divide(10, 2))    # 5.0
print(divide(10, 0))    # Error: division by zero
print(divide(10, "a"))  # Type error

Exceptions allow clean error handling without checking return codes everywhere. They separate error-handling code from normal logic.
Create custom exception class and handle it.
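A sketch of the custom-exception exercise (the `InsufficientFundsError` name and bank-account scenario are illustrative):

```python
class InsufficientFundsError(Exception):
    """Raised when a withdrawal exceeds the available balance."""
    def __init__(self, balance, amount):
        self.balance = balance
        self.amount = amount
        super().__init__(f"Cannot withdraw {amount}: balance is {balance}")

def withdraw(balance, amount):
    if amount > balance:
        raise InsufficientFundsError(balance, amount)
    return balance - amount

try:
    withdraw(100, 250)
except InsufficientFundsError as e:
    print(f"Error: {e}")                       # Error: Cannot withdraw 250: balance is 100
    print(f"Short by: {e.amount - e.balance}") # Short by: 150
```

Custom exceptions carry domain-specific data (here, `balance` and `amount`), letting the handler react precisely instead of parsing error strings.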
Object comparison interfaces.
"Two ways to define object ordering in Java."
# Python equivalent using key functions and functools
from functools import total_ordering

@total_ordering
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __eq__(self, other):
        return self.age == other.age

    def __lt__(self, other):
        return self.age < other.age  # Natural ordering by age

# Custom comparator (key function)
def by_name(person):
    return person.name

people = [Person("Bob", 25), Person("Alice", 30), Person("Charlie", 20)]
# Natural ordering (by age)
print("By age:", [p.name for p in sorted(people)])
# Custom ordering (by name)
print("By name:", [p.name for p in sorted(people, key=by_name)])

Comparable defines natural ordering within the class. Comparator allows multiple comparison strategies without modifying the class.
Sort objects by multiple criteria using Comparator chain.
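In Python, a Comparator chain is usually expressed as a tuple key. A sketch sorting by several criteria at once (the sample data is illustrative):

```python
people = [("Bob", 25, "Sales"), ("Alice", 30, "Sales"), ("Eve", 25, "HR")]

# Tuple keys chain the criteria: department ascending, then age descending
# (negate the numeric field to reverse it), then name ascending as a tiebreak
ordered = sorted(people, key=lambda p: (p[2], -p[1], p[0]))
print(ordered)  # HR first, then Sales by descending age
```

This mirrors Java's `Comparator.comparing(...).thenComparing(...)`: tuples compare element by element, so earlier fields dominate later ones.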
Object persistence and network transfer.
"Converting objects to byte streams and back."
import pickle

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __str__(self):
        return f"{self.name}, {self.age}"

# Serialization
person = Person("Alice", 30)
with open('person.pkl', 'wb') as f:
    pickle.dump(person, f)

# Deserialization
with open('person.pkl', 'rb') as f:
    loaded_person = pickle.load(f)
print(loaded_person)  # Alice, 30

Serialization converts object state to a format that can be stored or transmitted. Deserialization recreates the object from that format.
Serialize a complex object graph with references.
Concurrent execution within a process.
"Threads allow multiple execution paths in same process."
import threading
import time

def worker(name, delay):
    print(f"Worker {name} starting")
    time.sleep(delay)
    print(f"Worker {name} finished")

# Create threads
thread1 = threading.Thread(target=worker, args=("A", 2))
thread2 = threading.Thread(target=worker, args=("B", 1))

# Start threads
thread1.start()
thread2.start()

# Wait for completion
thread1.join()
thread2.join()
print("All workers done")

Threads run concurrently and share the same memory space. They're lighter than processes but require synchronization to avoid race conditions.
Create producer-consumer problem with threads.
Advanced SQL queries and database concepts.
Basic SELECT statement syntax.
"Retrieving all data from a database table."
import sqlite3

# Create and populate table
conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute('''
    CREATE TABLE employees (
        id INTEGER PRIMARY KEY,
        name TEXT,
        salary REAL,
        department TEXT
    )
''')
# Insert sample data
cursor.executemany('''
    INSERT INTO employees (name, salary, department)
    VALUES (?, ?, ?)
''', [
    ('Alice', 75000, 'Engineering'),
    ('Bob', 65000, 'Sales'),
    ('Charlie', 80000, 'Engineering')
])
# Select all records
cursor.execute('SELECT * FROM employees')
rows = cursor.fetchall()
print("All employees:")
for row in rows:
    print(row)
conn.close()

SELECT * FROM table_name retrieves all columns and rows. Use specific column names instead of * in production for better performance.
Select specific columns with aliases.
Filtering records based on conditions.
"Retrieving specific records that match criteria."
import sqlite3

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
# Create and populate table
cursor.execute('CREATE TABLE products (id INTEGER, name TEXT, price REAL, category TEXT)')
cursor.executemany('INSERT INTO products VALUES (?, ?, ?, ?)', [
    (1, 'Laptop', 1200, 'Electronics'),
    (2, 'Book', 25, 'Education'),
    (3, 'Phone', 800, 'Electronics'),
    (4, 'Chair', 150, 'Furniture')
])
# WHERE examples
print("Electronics products:")
cursor.execute('SELECT * FROM products WHERE category = ?', ('Electronics',))
for row in cursor.fetchall():
    print(row)
print("\nExpensive products (> $500):")
cursor.execute('SELECT name, price FROM products WHERE price > 500')
for row in cursor.fetchall():
    print(row)
conn.close()

WHERE clause filters rows based on conditions. Multiple conditions can be combined with AND/OR. Always use parameterized queries to prevent SQL injection.
Use WHERE with multiple conditions using AND/OR.
Sorting query results.
"Ordering results by one or more columns."
import sqlite3

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute('CREATE TABLE students (id INTEGER, name TEXT, grade REAL, subject TEXT)')
cursor.executemany('INSERT INTO students VALUES (?, ?, ?, ?)', [
    (1, 'Alice', 95, 'Math'),
    (2, 'Bob', 87, 'Math'),
    (3, 'Charlie', 92, 'Science'),
    (4, 'Diana', 88, 'Math')
])
print("Students ordered by grade (descending):")
cursor.execute('SELECT name, grade, subject FROM students ORDER BY grade DESC')
for row in cursor.fetchall():
    print(row)
print("\nStudents ordered by subject, then grade:")
cursor.execute('SELECT name, grade, subject FROM students ORDER BY subject ASC, grade DESC')
for row in cursor.fetchall():
    print(row)
conn.close()

ORDER BY sorts results after filtering. Multiple columns create hierarchical sorting. Default is ASC; use DESC for descending.
Order by calculated expression (e.g., price * quantity).
Grouping rows for aggregate functions.
"Grouping records by column values for summary operations."
import sqlite3

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute('CREATE TABLE sales (id INTEGER, product TEXT, amount REAL, region TEXT)')
cursor.executemany('INSERT INTO sales VALUES (?, ?, ?, ?)', [
    (1, 'Laptop', 1200, 'North'),
    (2, 'Phone', 800, 'South'),
    (3, 'Laptop', 1100, 'North'),
    (4, 'Tablet', 500, 'South'),
    (5, 'Phone', 900, 'North')
])
print("Total sales by product:")
cursor.execute('''
    SELECT product, COUNT(*) as count, SUM(amount) as total
    FROM sales
    GROUP BY product
''')
for row in cursor.fetchall():
    print(row)
print("\nAverage sales by region:")
cursor.execute('''
    SELECT region, AVG(amount) as average, COUNT(*) as transactions
    FROM sales
    GROUP BY region
''')
for row in cursor.fetchall():
    print(row)
conn.close()

GROUP BY creates groups of rows with the same values. Aggregate functions operate on each group. Non-grouped columns in SELECT must use aggregates.
Group by multiple columns and use HAVING clause.
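A sketch of the HAVING exercise: group by two columns, then filter the groups themselves (HAVING runs after aggregation, unlike WHERE, which filters rows before it). The sample data is illustrative.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE sales (product TEXT, amount REAL, region TEXT)')
cur.executemany('INSERT INTO sales VALUES (?, ?, ?)', [
    ('Laptop', 1200, 'North'), ('Phone', 800, 'South'),
    ('Laptop', 1100, 'North'), ('Phone', 900, 'North'),
])
# HAVING filters whole groups after aggregation
cur.execute('''
    SELECT region, product, SUM(amount) AS total
    FROM sales
    GROUP BY region, product
    HAVING SUM(amount) > 1000
''')
rows = cur.fetchall()
print(rows)  # Only (region, product) groups totalling over 1000
conn.close()
```

Here (North, Laptop) totals 2300 and survives; (North, Phone) at 900 and (South, Phone) at 800 are filtered out.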
Advanced aggregation over result sets.
"Performing calculations across related rows without grouping."
import sqlite3

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
# SQLite has supported window functions since 3.25 (2018); this example
# simulates a running total in plain Python so it works on any build
cursor.execute('CREATE TABLE employees (name TEXT, department TEXT, salary REAL)')
cursor.executemany('INSERT INTO employees VALUES (?, ?, ?)', [
    ('Alice', 'Engineering', 90000),
    ('Bob', 'Engineering', 80000),
    ('Charlie', 'Sales', 70000),
    ('Diana', 'Sales', 75000),
    ('Eve', 'Engineering', 85000)
])
# Simulate SUM(salary) OVER (PARTITION BY department ORDER BY salary DESC)
cursor.execute('SELECT * FROM employees ORDER BY department, salary DESC')
rows = cursor.fetchall()
print("Employees with department running totals:")
current_dept = None
dept_total = 0
for row in rows:
    name, dept, salary = row
    if dept != current_dept:
        current_dept = dept
        dept_total = 0
    dept_total += salary
    print(f"{name} ({dept}): ${salary} | Dept Total: ${dept_total}")
conn.close()

Window functions perform calculations across a set of rows related to the current row. They're powerful for running totals, rankings, and moving averages.
Calculate running totals and moving averages.
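For reference, a real running total can be written directly in SQL with a window function. A sketch, guarded for SQLite builds older than 3.25 (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE emp (name TEXT, dept TEXT, salary REAL)')
cur.executemany('INSERT INTO emp VALUES (?, ?, ?)', [
    ('Alice', 'Eng', 90000), ('Bob', 'Eng', 80000), ('Carol', 'Sales', 70000),
])
rows = None
if sqlite3.sqlite_version_info >= (3, 25, 0):  # Window functions need SQLite 3.25+
    cur.execute('''
        SELECT name, dept,
               SUM(salary) OVER (PARTITION BY dept ORDER BY salary DESC) AS running
        FROM emp
        ORDER BY dept, salary DESC
    ''')
    rows = cur.fetchall()
    print(rows)
conn.close()
```

`PARTITION BY dept` restarts the sum for each department, and `ORDER BY salary DESC` defines the running order within each partition.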
Web technologies and system design fundamentals.
Representational State Transfer architecture.
"Standard for web service communication."
# Flask REST API example
from flask import Flask, jsonify, request

app = Flask(__name__)

users = [
    {"id": 1, "name": "Alice", "email": "alice@example.com"},
    {"id": 2, "name": "Bob", "email": "bob@example.com"}
]

@app.route('/users', methods=['GET'])
def get_users():
    return jsonify(users)

@app.route('/users/<int:user_id>', methods=['GET'])
def get_user(user_id):
    user = next((u for u in users if u['id'] == user_id), None)
    if user:
        return jsonify(user)
    return jsonify({"error": "User not found"}), 404

@app.route('/users', methods=['POST'])
def create_user():
    new_user = request.get_json()
    new_user['id'] = len(users) + 1
    users.append(new_user)
    return jsonify(new_user), 201

if __name__ == '__main__':
    app.run(debug=True)

REST APIs use HTTP methods (GET, POST, PUT, DELETE) to manipulate resources. Each resource has a unique URI and operations are stateless.
Implement PUT and DELETE methods for the API.
JavaScript Object Notation format.
"Lightweight data interchange format."
import json

# Python dict to JSON
person = {
    "name": "Alice",
    "age": 30,
    "skills": ["Python", "JavaScript", "SQL"],
    "address": {
        "street": "123 Main St",
        "city": "Anytown",
        "country": "USA"
    }
}

# Serialize to JSON string
json_string = json.dumps(person, indent=2)
print("JSON string:")
print(json_string)

# Deserialize from JSON string
parsed_person = json.loads(json_string)
print("\nParsed back:")
print(f"Name: {parsed_person['name']}")
print(f"Skills: {', '.join(parsed_person['skills'])}")

# Working with files
with open('person.json', 'w') as f:
    json.dump(person, f, indent=2)
with open('person.json', 'r') as f:
    loaded_person = json.load(f)
print(f"\nLoaded from file: {loaded_person['name']}")

JSON is a text format for storing and transporting data. It's self-describing and easy to read, making it ideal for APIs and configuration files.
Parse a complex nested JSON structure and extract specific values.
State management in web applications.
"Maintaining user state across HTTP requests."
# Flask session and cookie example
from flask import Flask, session, request, make_response

app = Flask(__name__)
app.secret_key = 'your-secret-key-here'  # Required for sessions

@app.route('/login')
def login():
    # Set session data
    session['user_id'] = 123
    session['username'] = 'alice'
    # Create response with cookie
    resp = make_response("Logged in successfully")
    resp.set_cookie('last_login', '2024-01-15')
    return resp

@app.route('/profile')
def profile():
    # Access session data
    if 'user_id' in session:
        user_id = session['user_id']
        username = session['username']
        return f"Welcome {username} (ID: {user_id})"
    else:
        return "Please login first"

@app.route('/logout')
def logout():
    # Clear session
    session.clear()
    # Clear cookie
    resp = make_response("Logged out")
    resp.set_cookie('last_login', '', expires=0)
    return resp

@app.route('/check-cookie')
def check_cookie():
    # Read cookie
    last_login = request.cookies.get('last_login')
    if last_login:
        return f"Last login: {last_login}"
    else:
        return "No last login cookie found"

if __name__ == '__main__':
    app.run(debug=True)

Sessions store data on the server, cookies on the client. Session IDs in cookies link the two. Sessions are more secure for sensitive data.
Implement session timeout and automatic cleanup.
Standard response codes for web requests.
"Indicating success, errors, and redirects."
# Flask HTTP status codes example
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

users = {"alice": "password123", "bob": "password456"}

@app.route('/login', methods=['POST'])
def login():
    data = request.get_json()
    if not data or 'username' not in data or 'password' not in data:
        return jsonify({"error": "Missing username or password"}), 400
    username = data['username']
    password = data['password']
    if username not in users:
        return jsonify({"error": "User not found"}), 404
    if users[username] != password:
        return jsonify({"error": "Invalid password"}), 401
    return jsonify({"message": "Login successful", "user": username}), 200

@app.route('/users/<username>')
def get_user(username):
    if username not in users:
        abort(404, description="User not found")
    return jsonify({"username": username}), 200

@app.route('/admin')
def admin():
    # Simulate unauthorized access
    return jsonify({"error": "Admin access required"}), 403

@app.route('/redirect')
def redirect_example():
    # Note: this only sets the status code; a real redirect would use
    # flask.redirect(), which also sends a Location header
    return jsonify({"message": "This endpoint moved"}), 301

@app.route('/server-error')
def server_error():
    # Simulate server error
    return jsonify({"error": "Internal server error"}), 500

if __name__ == '__main__':
    app.run(debug=True)

HTTP status codes provide a standardized way to indicate request outcomes: 2xx for success, 4xx for client errors, 5xx for server errors.
Implement a function that returns appropriate status codes for different scenarios.
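One way to sketch this exercise: a plain mapping from outcome names to codes (the outcome names are illustrative):

```python
def status_for(outcome):
    """Map common request outcomes to HTTP status codes (illustrative subset)."""
    codes = {
        "ok": 200, "created": 201, "no_content": 204,
        "bad_request": 400, "unauthorized": 401, "forbidden": 403,
        "not_found": 404, "conflict": 409, "server_error": 500,
    }
    return codes.get(outcome, 500)  # Unknown outcomes default to 500

print(status_for("created"))    # 201
print(status_for("not_found"))  # 404
```

Centralizing the mapping keeps handlers consistent and makes the code/outcome relationship easy to audit.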
Fundamental data structures and their use cases.
Linear data structure trade-offs.
"Choosing between contiguous and linked storage."
# Array (list) vs Linked List simulation
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def append(self, data):
        if not self.head:
            self.head = Node(data)
            return
        current = self.head
        while current.next:
            current = current.next
        current.next = Node(data)

    def get(self, index):
        current = self.head
        for i in range(index):
            if current is None:
                return None
            current = current.next
        return current.data if current else None

# Array usage - fast random access
arr = [1, 2, 3, 4, 5]
print("Array access arr[2]:", arr[2])  # O(1)

# Linked List usage - fast insertions
ll = LinkedList()
for i in range(1, 6):
    ll.append(i)
print("Linked list access ll.get(2):", ll.get(2))  # O(n)

# Array insertion (expensive)
arr.insert(0, 0)  # O(n) - shifts all elements
print("Array after insert:", arr)

# Linked List insertion (cheap at head)
new_head = Node(0)       # O(1) - link the new node in front of the old head
new_head.next = ll.head
ll.head = new_head
print("Linked list after insert at head:", ll.get(0))

Arrays excel at random access but struggle with insertions. Linked lists handle dynamic sizing and frequent modifications well but have slow access.
Implement a hybrid structure that combines benefits of both.
LIFO data structure.
"Last In, First Out operations."
class Stack:
    def __init__(self):
        self.items = []

    def push(self, item):
        self.items.append(item)

    def pop(self):
        if not self.is_empty():
            return self.items.pop()
        return None

    def peek(self):
        if not self.is_empty():
            return self.items[-1]
        return None

    def is_empty(self):
        return len(self.items) == 0

    def size(self):
        return len(self.items)

# Usage examples
stack = Stack()

# Browser back button simulation
stack.push("page1.html")
stack.push("page2.html")
stack.push("page3.html")
print("Current page:", stack.peek())
print("Going back...")
stack.pop()
print("Current page:", stack.peek())

# Expression evaluation (balanced parentheses)
def is_balanced(expression):
    stack = Stack()
    brackets = {')': '(', '}': '{', ']': '['}
    for char in expression:
        if char in '({[':
            stack.push(char)
        elif char in ')}]':
            if stack.is_empty() or stack.pop() != brackets[char]:
                return False
    return stack.is_empty()

print("Balanced: (a+b)", is_balanced("(a+b)"))
print("Unbalanced: (a+b", is_balanced("(a+b"))

Stack follows the LIFO principle. Perfect for undo operations, function calls, expression evaluation, and backtracking algorithms.
Implement stack using two queues.
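A sketch of the two-queue stack exercise: push is O(1), and pop drains all but the last element into a helper queue, so the most recently pushed item comes out first.

```python
from collections import deque

class StackFromQueues:
    """LIFO stack built from two FIFO queues."""
    def __init__(self):
        self.q1 = deque()  # Main queue
        self.q2 = deque()  # Helper used during pop

    def push(self, item):  # O(1)
        self.q1.append(item)

    def pop(self):         # O(n): drain all but the last element into the helper
        if not self.q1:
            return None
        while len(self.q1) > 1:
            self.q2.append(self.q1.popleft())
        top = self.q1.popleft()
        self.q1, self.q2 = self.q2, self.q1  # Helper becomes the main queue
        return top

s = StackFromQueues()
s.push(1); s.push(2); s.push(3)
print(s.pop())  # 3 - last in, first out
print(s.pop())  # 2
```

The symmetric variant makes push O(n) instead, rotating the queue after each insert so pop stays O(1).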
FIFO data structure.
"First In, First Out operations."
from collections import deque

class Queue:
    def __init__(self):
        self.items = deque()

    def enqueue(self, item):
        self.items.append(item)

    def dequeue(self):
        if not self.is_empty():
            return self.items.popleft()
        return None

    def front(self):
        if not self.is_empty():
            return self.items[0]
        return None

    def is_empty(self):
        return len(self.items) == 0

    def size(self):
        return len(self.items)

# Usage examples
queue = Queue()

# Print job simulation
queue.enqueue("Document1.pdf")
queue.enqueue("Document2.pdf")
queue.enqueue("Document3.pdf")
print("Next job to print:", queue.front())
print("Processing:", queue.dequeue())
print("Next job:", queue.front())

# BFS (Breadth First Search) simulation
def bfs_simulation():
    queue = Queue()
    visited = set()
    # Start from node 1
    queue.enqueue(1)
    visited.add(1)
    while not queue.is_empty():
        current = queue.dequeue()
        print(f"Processing node: {current}")
        # Simulate neighbors (in real BFS, get actual neighbors)
        neighbors = [current * 2, current * 2 + 1]  # Example
        for neighbor in neighbors:
            if neighbor < 10 and neighbor not in visited:  # Limit for demo
                queue.enqueue(neighbor)
                visited.add(neighbor)

bfs_simulation()

Queue follows the FIFO principle. Essential for task scheduling, breadth-first search, and any scenario requiring ordered processing.
Implement queue using two stacks.
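A sketch of the two-stack queue exercise: one stack receives new items, the other serves them; reversing the inbox once per refill yields FIFO order with amortized O(1) operations.

```python
class QueueFromStacks:
    """FIFO queue built from two LIFO stacks."""
    def __init__(self):
        self.inbox = []   # Push new items here
        self.outbox = []  # Pop from here; refill by reversing the inbox

    def enqueue(self, item):
        self.inbox.append(item)

    def dequeue(self):
        if not self.outbox:
            while self.inbox:  # Reverse the order once, lazily
                self.outbox.append(self.inbox.pop())
        return self.outbox.pop() if self.outbox else None

q = QueueFromStacks()
for job in ["Doc1", "Doc2", "Doc3"]:
    q.enqueue(job)
print(q.dequeue())  # Doc1 - first in, first out
q.enqueue("Doc4")
print(q.dequeue())  # Doc2
```

Each element is moved between stacks at most once, so a sequence of n operations costs O(n) total despite the occasional O(n) refill.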
Hash table implementation.
"Key-value storage with fast lookups."
# Simple HashMap implementation
class SimpleHashMap:
    def __init__(self, size=10):
        self.size = size
        self.buckets = [[] for _ in range(size)]
        self.count = 0

    def _hash(self, key):
        return hash(key) % self.size

    def put(self, key, value):
        index = self._hash(key)
        bucket = self.buckets[index]
        # Check if key exists
        for i, (k, v) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))
        self.count += 1
        # Resize if load factor > 0.75
        if self.count / self.size > 0.75:
            self._resize()

    def get(self, key):
        index = self._hash(key)
        bucket = self.buckets[index]
        for k, v in bucket:
            if k == key:
                return v
        return None

    def _resize(self):
        old_buckets = self.buckets
        self.size *= 2
        self.buckets = [[] for _ in range(self.size)]
        self.count = 0
        for bucket in old_buckets:
            for key, value in bucket:
                self.put(key, value)

# Usage
hm = SimpleHashMap()
hm.put("name", "Alice")
hm.put("age", 30)
hm.put("city", "NYC")
print("Name:", hm.get("name"))
print("Age:", hm.get("age"))
print("City:", hm.get("city"))

HashMap uses a hash function for O(1) average-case operations. Collisions are resolved through chaining. Exceeding the load factor triggers resizing.
Implement different collision resolution strategies.
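A sketch of one alternative collision strategy, linear probing (open addressing): on a collision, scan forward for the next free slot. Deletion would additionally need tombstone markers, which this sketch omits.

```python
class LinearProbingMap:
    """Open addressing: colliding keys probe forward for the next free slot."""
    _EMPTY = object()  # Sentinel distinct from any stored value

    def __init__(self, size=8):
        self.size = size
        self.slots = [self._EMPTY] * size

    def put(self, key, value):
        i = hash(key) % self.size
        for _ in range(self.size):  # Probe at most size slots
            slot = self.slots[i]
            if slot is self._EMPTY or slot[0] == key:
                self.slots[i] = (key, value)
                return
            i = (i + 1) % self.size  # Linear probe: try the next slot
        raise RuntimeError("table full - a real map would resize here")

    def get(self, key):
        i = hash(key) % self.size
        for _ in range(self.size):
            slot = self.slots[i]
            if slot is self._EMPTY:
                return None  # An empty slot ends the probe chain
            if slot[0] == key:
                return slot[1]
            i = (i + 1) % self.size
        return None

m = LinearProbingMap()
m.put("name", "Alice")
m.put("city", "NYC")
print(m.get("name"), m.get("city"))  # Alice NYC
```

Compared with chaining, open addressing keeps all entries in one array (better cache locality) but degrades sharply as the table fills, so real implementations resize early.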
Set implementation trade-offs.
"Choosing between hash-based and tree-based sets."
# Python set (hash-based) vs sorted containers
# A real sorted set needs an external library, e.g.:
# from sortedcontainers import SortedSet

# HashSet equivalent (unordered)
hash_set = set()
hash_set.add(30)
hash_set.add(10)
hash_set.add(20)
hash_set.add(10)  # Duplicate, ignored
print("HashSet (unordered):", hash_set)
print("Contains 20:", 20 in hash_set)  # O(1)

# TreeSet equivalent (ordered) - simulated with a sorted list;
# a real TreeSet (balanced BST) gives O(log n) search
tree_set = []

def add_to_tree_set(value):
    if value not in tree_set:
        tree_set.append(value)
        tree_set.sort()

add_to_tree_set(30)
add_to_tree_set(10)
add_to_tree_set(20)
add_to_tree_set(10)
print("TreeSet (ordered):", tree_set)
print("Contains 20:", 20 in tree_set)  # O(n) for this list simulation

# Performance comparison
import time

# Large dataset
data = list(range(10000))

# HashSet performance
start = time.time()
hash_set = set(data)
for i in range(10000):
    i in hash_set
hash_time = time.time() - start

# TreeSet simulation
start = time.time()
tree_set = sorted(data)
for i in range(10000):
    i in tree_set  # Linear search
tree_time = time.time() - start

print(f"\nHashSet time: {hash_time:.4f}s")
print(f"TreeSet time: {tree_time:.4f}s")
print(f"HashSet is {tree_time/hash_time:.1f}x faster")

HashSet provides O(1) operations but no ordering. TreeSet maintains sorted order with O(log n) operations. Choose based on access patterns.
Implement a custom set that combines both approaches.
Common solutions to recurring design problems.
Ensuring single instance of a class.
"Global access to unique object instance."
class Singleton:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        if not hasattr(self, '_initialized'):
            self._initialized = True
            self.data = "Singleton data"

# Usage
s1 = Singleton()
s1.data = "Modified by s1"
s2 = Singleton()
print("s2.data:", s2.data)  # Same instance
print("Same instance?", s1 is s2)  # True

# Real-world example: Database connection
class DatabaseConnection:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        if not hasattr(self, '_initialized'):
            self._initialized = True
            self.connection_string = "Connected to database"
            print("Database connection established")

    def query(self, sql):
        return f"Executing: {sql}"

# Only one connection created
db1 = DatabaseConnection()
db2 = DatabaseConnection()
print("Same connection?", db1 is db2)
print(db1.query("SELECT * FROM users"))

Singleton ensures only one instance exists. Useful for database connections, configuration managers, and logging systems.
Implement thread-safe singleton with different initialization strategies.
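A sketch of a thread-safe variant using double-checked locking (eager initialization at module import time is the other common strategy):

```python
import threading

class ThreadSafeSingleton:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:          # Fast path: skip the lock once created
            with cls._lock:
                if cls._instance is None:  # Double-checked: re-verify under the lock
                    cls._instance = super().__new__(cls)
        return cls._instance

instances = []

def make():
    instances.append(ThreadSafeSingleton())

workers = [threading.Thread(target=make) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(all(i is instances[0] for i in instances))  # True
```

The second check inside the lock is essential: two threads can both pass the first check before either has created the instance.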
Object creation through factory methods.
"Delegating object creation to factory classes."
from abc import ABC, abstractmethod

# Product interface
class Shape(ABC):
    @abstractmethod
    def draw(self):
        pass

# Concrete products
class Circle(Shape):
    def draw(self):
        return "Drawing a circle"

class Square(Shape):
    def draw(self):
        return "Drawing a square"

class Triangle(Shape):
    def draw(self):
        return "Drawing a triangle"

# Factory
class ShapeFactory:
    @staticmethod
    def create_shape(shape_type):
        if shape_type == "circle":
            return Circle()
        elif shape_type == "square":
            return Square()
        elif shape_type == "triangle":
            return Triangle()
        else:
            raise ValueError(f"Unknown shape type: {shape_type}")

# Usage
shapes = ["circle", "square", "triangle", "circle"]
for shape_type in shapes:
    shape = ShapeFactory.create_shape(shape_type)
    print(shape.draw())

# Real-world example: Database connection factory
class DatabaseConnection(ABC):
    @abstractmethod
    def connect(self):
        pass

class MySQLConnection(DatabaseConnection):
    def connect(self):
        return "Connected to MySQL database"

class PostgreSQLConnection(DatabaseConnection):
    def connect(self):
        return "Connected to PostgreSQL database"

class DatabaseFactory:
    @staticmethod
    def create_connection(db_type):
        if db_type == "mysql":
            return MySQLConnection()
        elif db_type == "postgresql":
            return PostgreSQLConnection()
        else:
            raise ValueError(f"Unsupported database: {db_type}")

# Usage
db = DatabaseFactory.create_connection("mysql")
print(db.connect())

Factory pattern encapsulates object creation logic. Client code depends on abstractions, making it easy to add new product types.
Implement Abstract Factory pattern for creating families of related objects.
One-to-many dependency between objects.
"Notifying multiple objects when state changes."
from abc import ABC, abstractmethod

# Observer interface
class Observer(ABC):
    @abstractmethod
    def update(self, message):
        pass

# Subject
class NewsPublisher:
    def __init__(self):
        self._observers = []
        self._latest_news = None

    def subscribe(self, observer):
        self._observers.append(observer)

    def unsubscribe(self, observer):
        self._observers.remove(observer)

    def notify_observers(self):
        for observer in self._observers:
            observer.update(self._latest_news)

    def publish_news(self, news):
        self._latest_news = news
        print(f"Publishing news: {news}")
        self.notify_observers()

# Concrete observers
class EmailSubscriber(Observer):
    def __init__(self, email):
        self.email = email

    def update(self, message):
        print(f"Email to {self.email}: {message}")

class SMSSubscriber(Observer):
    def __init__(self, phone):
        self.phone = phone

    def update(self, message):
        print(f"SMS to {self.phone}: {message}")

# Usage
publisher = NewsPublisher()
email_sub = EmailSubscriber("user@example.com")
sms_sub = SMSSubscriber("+1234567890")
publisher.subscribe(email_sub)
publisher.subscribe(sms_sub)
publisher.publish_news("Breaking: New product launched!")
publisher.unsubscribe(email_sub)
publisher.publish_news("Update: Product selling fast!")

Observer pattern enables loose coupling between subject and observers. The subject doesn't need to know concrete observer types.
Implement observer with different notification strategies (push vs pull).
Essential version control concepts and commands.
Basic Git workflow commands.
"Initializing repository and saving changes."
# Simulating Git workflow with file operations
import os
from datetime import datetime

class SimpleGit:
    def __init__(self, repo_path):
        self.repo_path = repo_path
        self.git_dir = os.path.join(repo_path, '.git')
        self.index = {}  # Staging area
        self.commits = []

    def init(self):
        os.makedirs(self.git_dir, exist_ok=True)
        os.makedirs(os.path.join(self.git_dir, 'objects'), exist_ok=True)
        with open(os.path.join(self.git_dir, 'HEAD'), 'w') as f:
            f.write('ref: refs/heads/master\n')
        print("Initialized empty Git repository")

    def add(self, file_path):
        if os.path.exists(file_path):
            with open(file_path, 'r') as f:
                content = f.read()
            self.index[file_path] = content
            print(f"Added {file_path} to staging area")
        else:
            print(f"File {file_path} does not exist")

    def commit(self, message):
        if not self.index:
            print("Nothing to commit")
            return
        commit = {
            'message': message,
            'timestamp': datetime.now().isoformat(),
            'files': self.index.copy()
        }
        self.commits.append(commit)
        self.index.clear()
        print(f"Committed: {message}")
        print(f"Files changed: {len(commit['files'])}")

    def status(self):
        print("Staged files:")
        for file in self.index:
            print(f"  {file}")
        if not self.index:
            print("  (nothing staged)")

# Usage demonstration
repo = SimpleGit("my_project")
repo.init()

# Create some files
with open("README.md", "w") as f:
    f.write("# My Project\n\nThis is a sample project.")
with open("app.py", "w") as f:
    f.write("print('Hello, World!')")

# Git workflow
repo.add("README.md")
repo.add("app.py")
repo.status()
repo.commit("Initial commit")

# Modify and commit again
with open("app.py", "w") as f:
    f.write("print('Hello, Git!')")
repo.add("app.py")
repo.commit("Update greeting message")

Git workflow: init/clone to start, add to stage changes, commit to save. This creates a history of your project's changes.
Implement git log to show commit history.
Synchronizing with remote repositories.
"Working with remote Git repositories."
# Simulating remote operations
from datetime import datetime

class RemoteRepository:
    def __init__(self, name):
        self.name = name
        self.branches = {"main": []}

    def push(self, branch, commits):
        if branch not in self.branches:
            self.branches[branch] = []
        # Remember how many commits existed before this push
        existing_count = len(self.branches[branch])
        self.branches[branch].extend(commits)
        print(f"Pushed {len(commits)} commits to {self.name}/{branch}")
        return existing_count

    def fetch(self, branch):
        if branch in self.branches:
            return self.branches[branch].copy()
        return []

class LocalRepository:
    def __init__(self, name):
        self.name = name
        self.local_commits = []
        self.remote_commits = []  # last known state of the remote branch
        self.remote = None

    def set_remote(self, remote):
        self.remote = remote

    def commit(self, message):
        commit = {
            "message": message,
            "timestamp": datetime.now().isoformat(),
            "id": f"commit_{len(self.local_commits) + 1}"
        }
        self.local_commits.append(commit)
        print(f"Committed locally: {message}")

    def push(self, branch="main"):
        if not self.remote:
            print("No remote repository configured")
            return
        # Only push commits the remote doesn't have yet
        new_commits = self.local_commits[len(self.remote_commits):]
        if not new_commits:
            print("Already up to date")
            return
        self.remote.push(branch, new_commits)
        # Record that the remote now matches our local history
        self.remote_commits = self.local_commits.copy()
        print(f"Successfully pushed to {self.remote.name}")

    def fetch(self, branch="main"):
        if not self.remote:
            print("No remote repository configured")
            return
        remote_commits = self.remote.fetch(branch)
        new_commits = len(remote_commits) - len(self.remote_commits)
        if new_commits > 0:
            self.remote_commits = remote_commits
            print(f"Fetched {new_commits} new commits from {self.remote.name}")
        else:
            print("Already up to date")

    def pull(self, branch="main"):
        self.fetch(branch)
        # Simple merge simulation - just update local with remote
        if len(self.remote_commits) > len(self.local_commits):
            self.local_commits = self.remote_commits.copy()
            print("Merged remote changes into local branch")

    def status(self):
        print(f"Local repository: {self.name}")
        print(f"Local commits: {len(self.local_commits)}")
        print(f"Remote commits: {len(self.remote_commits)}")
        if self.remote:
            print(f"Remote: {self.remote.name}")

# Demonstration
remote = RemoteRepository("origin")
local = LocalRepository("my_repo")
local.set_remote(remote)

# Make some commits
local.commit("Initial commit")
local.commit("Add feature A")
local.commit("Fix bug B")
print("\nBefore push:")
local.status()

# Push to remote
local.push()
print("\nAfter push:")
local.status()

# Simulate another developer pushing
remote.push("main", [{"message": "Remote commit", "timestamp": datetime.now().isoformat(), "id": "remote_1"}])
print("\nAfter remote push:")
local.fetch()
local.status()

print("\nPull to merge:")
local.pull()
local.status()

Push uploads your commits, pull downloads and merges, fetch only downloads without merging. Use fetch to see changes before merging.
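The fetch-then-compare step generalizes to computing how far a local branch is ahead of or behind its remote. A minimal sketch with a hypothetical `ahead_behind` helper (short commit IDs stand in for real SHAs, and history is assumed linear and append-only):

```python
# Hypothetical helper: compare two linear commit-ID lists and report
# how many commits each side has that the other lacks.
def ahead_behind(local_ids, remote_ids):
    # Length of the common prefix = shared history
    common = 0
    for a, b in zip(local_ids, remote_ids):
        if a != b:
            break
        common += 1
    ahead = len(local_ids) - common    # commits only we have (need push)
    behind = len(remote_ids) - common  # commits only the remote has (need pull)
    return ahead, behind

print(ahead_behind(["c1", "c2", "c3"], ["c1", "c2"]))  # (1, 0): push needed
print(ahead_behind(["c1"], ["c1", "c2", "c3"]))        # (0, 2): pull needed
print(ahead_behind(["c1", "x"], ["c1", "y"]))          # (1, 1): histories diverged
```

The last case, where both counts are nonzero, is exactly the situation where a plain fast-forward is impossible and a merge (or rebase) is required.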
Implement merge conflict detection and resolution.
Organizing development with branches.
"Managing multiple lines of development."
# Simulating Git Flow branching strategy
from datetime import datetime

class GitBranch:
    def __init__(self, name):
        self.name = name
        self.commits = []

    def commit(self, message):
        commit = {
            "message": message,
            "timestamp": datetime.now().isoformat(),
            "branch": self.name
        }
        self.commits.append(commit)
        print(f"[{self.name}] Committed: {message}")

class GitRepository:
    def __init__(self):
        self.branches = {"main": GitBranch("main")}
        self.current_branch = "main"

    def create_branch(self, name, from_branch="main"):
        if name in self.branches:
            print(f"Branch {name} already exists")
            return
        # Copy commits from source branch
        self.branches[name] = GitBranch(name)
        self.branches[name].commits = self.branches[from_branch].commits.copy()
        print(f"Created branch '{name}' from '{from_branch}'")

    def checkout(self, branch_name):
        if branch_name not in self.branches:
            print(f"Branch {branch_name} does not exist")
            return
        self.current_branch = branch_name
        print(f"Switched to branch '{branch_name}'")

    def commit(self, message):
        self.branches[self.current_branch].commit(message)

    def merge(self, source_branch, target_branch="main"):
        if source_branch not in self.branches or target_branch not in self.branches:
            print("Branch does not exist")
            return
        source_commits = self.branches[source_branch].commits
        target_commits = self.branches[target_branch].commits
        # Simple merge - add only the commits the target doesn't have yet
        new_commits = [c for c in source_commits if c not in target_commits]
        self.branches[target_branch].commits.extend(new_commits)
        print(f"Merged {len(new_commits)} commits from '{source_branch}' to '{target_branch}'")

# Git Flow demonstration
repo = GitRepository()

# Start with main branch
repo.commit("Initial release")

# Create develop branch
repo.create_branch("develop")
repo.checkout("develop")
repo.commit("Start development")

# Create feature branch
repo.create_branch("feature/user-auth", "develop")
repo.checkout("feature/user-auth")
repo.commit("Add user authentication")
repo.commit("Add password hashing")
repo.commit("Add login/logout")

# Merge feature back to develop
repo.checkout("develop")
repo.merge("feature/user-auth", "develop")

# Create release branch
repo.create_branch("release/v1.1", "develop")
repo.checkout("release/v1.1")
repo.commit("Bump version to 1.1")
repo.commit("Update documentation")

# Merge release to main and develop
repo.checkout("main")
repo.merge("release/v1.1", "main")
repo.checkout("develop")
repo.merge("release/v1.1", "develop")

# Hotfix example
repo.create_branch("hotfix/security-patch", "main")
repo.checkout("hotfix/security-patch")
repo.commit("Fix security vulnerability")

# Merge hotfix to both main and develop
repo.checkout("main")
repo.merge("hotfix/security-patch", "main")
repo.checkout("develop")
repo.merge("hotfix/security-patch", "develop")

print("\nFinal state:")
for branch_name, branch in repo.branches.items():
    print(f"{branch_name}: {len(branch.commits)} commits")

Git Flow uses multiple branches for different purposes: main for production, develop for integration, feature branches for new work, release branches for stabilization, and hotfix branches for urgent production fixes.
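Because Git Flow encodes a branch's purpose in its name, the role of a branch can be recovered mechanically. A minimal sketch with a hypothetical `branch_role` helper (the prefix conventions match the branches used in the demonstration above):

```python
# Hypothetical helper: classify a branch name by Git Flow naming convention
def branch_role(name):
    if name == "main":
        return "production"
    if name == "develop":
        return "integration"
    if name.startswith("feature/"):
        return "feature work"
    if name.startswith("release/"):
        return "release stabilization"
    if name.startswith("hotfix/"):
        return "urgent production fix"
    return "unknown"

print(branch_role("feature/user-auth"))      # feature work
print(branch_role("hotfix/security-patch"))  # urgent production fix
print(branch_role("main"))                   # production
```

Teams often enforce conventions like these in CI or with commit hooks, rejecting branches whose names don't match an expected prefix.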
Implement pull request workflow with code review simulation.