Posts

CST 462S - Learning Journal (service learning)

Working with the American Technology Initiative was really fun, informative, and rewarding. I spent most of my time on their volunteer portal, a prototype-level app that makes volunteer reporting way easier. The codebase was almost release-ready, so I got to dive in, explore its parts, and see how a familiar tech stack scales up for real users. Since the core features were already in place, I focused on user testing: hunting bugs, suggesting little tweaks, and polishing usability. Even though these were small changes, it felt great knowing they’d make a real difference for volunteers. If there’s one thing I’d change, it’d be the length of the service learning project. Twenty-five hours over eight weeks barely gives you time to get comfortable with a new codebase; from my experience, you need at least forty hours just to learn the ropes, and even more to add substantial features. I’d love to see future SL programs run at least 120 hours (about six months) so you can really dig in. The highlight...

CST 334 Week 8

This week, as we wrapped up CST 334, I reflected on persistence and how it applies to both operating systems and this course. In operating systems, persistence ensures that data remains intact and accessible even when the system restarts or loses power. This concept ties closely to topics like file system abstraction, which simplifies how storage is managed, and the use of disk drives to store data reliably. Understanding these topics made me appreciate the detailed mechanisms, such as DMA and polling, that make this reliability possible. As a student, persistence took on a different meaning: staying consistent and determined despite challenges. From navigating early weeks filled with shell scripting and C programming to mastering memory virtualization and concurrency, this course required patience and focus. Each week built on the last, culminating in a better understanding of how operating systems stay efficient, organized, and reliable. This journey showed me not...

CST 334 Week 7

This week in CST 334, I learned a lot about how operating systems handle persistence and manage storage devices. The Hardware Interface of a Canonical Device and the Simple Device Model helped me see how the OS talks to hardware through registers like status, command, and data. I also found the I/O architecture really interesting, especially how buses connect the CPU, memory, and other devices so everything works together. Seeing how the OS reads and writes those registers, whether through explicit I/O instructions or memory-mapped I/O, made these low-level interactions much clearer. One thing that stood out was Direct Memory Access (DMA), which offloads data transfers from the CPU to speed things up. I also learned about polling, where the OS repeatedly checks a device’s status, and how that compares to interrupts. On the storage side, the concept of File System Abstraction was really helpful in showing how the OS organizes and accesses data on disk drives. Over...

CST 334 Week 6

This week I learned about semaphores and their role in synchronizing threads to prevent concurrency issues. The focus was on solving problems like the producer-consumer scenario, where shared resources must be accessed in a controlled manner to avoid conflicts or resource starvation. In the lab, I worked on implementing synchronization using semaphores, which helped me understand their practical application and importance in maintaining program stability. The readings and lectures covered the foundational concepts of semaphores and monitors, providing a clearer picture of how these tools are used to manage thread communication effectively. This week’s work deepened my understanding of concurrency challenges and how semaphores can ensure safe and predictable thread interactions.

CST 334 Week 5

This week I focused on concurrency and how threads operate within a program. In the lab, I worked on a program that demonstrated how using the same variable across multiple threads can lead to issues, such as unexpected behavior and incorrect outputs. This highlighted the challenges of managing shared state in concurrent programming and the importance of addressing these problems effectively. Alongside the lab, I studied thread APIs, locks, and synchronization, which are key tools for controlling thread behavior and preventing issues like race conditions. These concepts helped me understand how to coordinate threads to ensure that programs run smoothly and reliably. Overall, this week gave me a solid understanding of the complexities of concurrency and how to handle them using the right techniques.

CST 334 Week 4

This week, I focused on understanding memory virtualization and its role in operating systems. I explored various techniques such as segmentation, paging, and swapping. Segmentation organizes memory into logical sections like code and data, which works well for modular programs, though it can lead to external fragmentation. Paging, on the other hand, divides memory into fixed-size pages, minimizing gaps but potentially causing internal fragmentation when pages are underutilized. I also learned about segmented paging, a hybrid approach that combines the benefits of both methods but adds complexity. Additionally, I studied swapping, where the system moves data between RAM and disk to handle memory shortages. However, excessive swapping can lead to thrashing, significantly degrading system performance. It was fascinating to see how these techniques work together to optimize memory management. In the lab, I implemented a FIFO (First-In-First-Out) page replacement algorithm to study the...

CST 334 Week 3

This week, I got into memory virtualization and learned a lot about how operating systems handle memory. I covered things like address spaces, the Memory API, address translation, and segmentation. Each concept helped me understand how the OS keeps processes isolated and organized in memory. Address spaces and segmentation, in particular, showed me how the OS keeps different processes from interfering with each other by giving each one its own memory space. I also spent time with Linux commands like grep, sed, and awk and got into using regex in bash. Since I’ve worked with regex in Java before, it was fun to see it in action on the command line. Regex makes text processing really efficient, and using it alongside grep and sed made sorting through files a lot easier. On the coding side, I worked with inter-process communication (IPC) in C, mainly using pipes, and practiced using Makefiles to keep everything organized. Using fork() and exec(), I could create child processes that ru...