Project Journal for Aidan McGoogan

=============== Week 14 ================

Entry 4: -----------------------------------------------------------------

Date: April 18th
Start Time: 11:30am
Duration: 2.5 hours
  • We are meeting before our 1pm scheduled final checkoff.
  • It’s my job to ensure the audio sensor code and the Jetson UART code work on the final hardware for the demo.
  • At 1pm, Dr. Walter and Shivam came over and we showed them our project to get all the checkoffs.
  • The process of showing them everything took about 1 hour and we successfully got 5/5 PSDRs and 2/2 stretch PSDRs.
  • This is my last journal entry since we finished a week early and earned the 1% extra credit boost.

Entry 3: -----------------------------------------------------------------

Date: April 16th
Start Time: 10:30pm
Duration: 2 hours
  • I ended up coming back to lab tonight.
  • Trying to get the UART connection debugged.
  • I fixed the light.elf build error by renaming the jetson_uart.c file to a .cpp file and changing some syntax in the .h file.
  • I spent probably an hour and a half debugging and trying different things to get a basic UART message sending to the Jetson.
  • In the end, I realized that I had my TX and RX wires crossed; as soon as I swapped them, I got a successful serial connection:

  • example2
  • With this working, I wanted to go back and get the live color on the LEDs from adaptive mode to show on the Jetson terminal.
  • I uncommented all that code and successfully got the colors to show:

  • example2
  • This will get us checked off for the Jetson UART hardware PSDR!

Entry 2: -----------------------------------------------------------------

Date: April 16th
Start Time: 4:30pm
Duration: 3.5 hours
  • Started this session by getting the 5V 4A barrel jack power supply from the TA.
  • I tried powering the Jetson back on using this power supply but it was unresponsive.
  • I decided to reflash the SD card to restart the setup process.
  • I obtained a microSD-to-SD adapter from Shivam to flash the Nvidia OS from my laptop like I did Tuesday.
  • I plugged the microSD back into the Jetson and restarted the setup process.
  • This is what my setup looks like: I have the monitor, keyboard, mouse, power supply, and microSD card plugged into the Jetson.

  • example2
  • I, again, followed all of the setup directions and recommendations from the Nvidia website.
  • My goal is to get the bare minimum working to demonstrate the UART connection from the microcontroller to the Jetson.
  • I think I am going to try to get the Jetson terminal to display the currently displayed color while in adaptive mode. This should satisfy the stretch PSDR.
  • My first step was to make the UART connection from the Jetson to the dev board that had the microcontroller. I connected ESP32 TX (GPIO10) to Jetson RX (Pin 10) and connected ESP32 RX (GPIO9) to Jetson TX (Pin 8).
  • I started on a new Git branch, “Jetson”.
  • I first started writing code for the microcontroller. I made new files, jetson_uart.c and jetson_uart.h:

  • example2 example2
  • I then call jetson_send_color(color) from the FFT program right after the object is instantiated.
  • With this in a good place, I then switched to writing a Python script that would use pyserial to print out what arrives over the UART serial connection.

  • example2
  • I was trying to install pip and pyserial but was confused why none of these basic downloads were working. I then realized that the Jetson wasn’t connected to any network, so I grabbed an Ethernet cable and plugged it in.
  • Upon running the Python script for the first time, I got permission-denied errors for ‘/dev/ttyTHS1’.
  • ChatGPT helped me diagnose this since it is an error I’ve never seen before. I ran:
  • sudo usermod -a -G dialout $USER
  • sudo reboot
  • The Python script ran after this but wasn’t receiving any UART data from the microcontroller.
  • I went back to the micro code and realized there were a lot of errors. I added some includes and moved some function definitions.
  • At the end of this session, I removed all errors in the build but the build still failed with “light.elf”. I don’t know what that means.
  • I pushed my current code to my Github branch. I will have to resume this work Friday.
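  • As an aside, the actual jetson_uart.c isn’t reproduced in this journal (only screenshots), but the idea behind jetson_send_color can be sketched on a host. The function name, message format, and helper below are illustrative assumptions, not the project’s real code; on the ESP32 the formatted string would go to the UART driver instead of a buffer.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical framing for the color message sent to the Jetson.
 * One line per update, e.g. "COLOR 255 0 64\n", so the Python
 * reader on the Jetson side can parse it with a simple split(). */
static int format_color_message(char *buf, size_t len,
                                int r, int g, int b) {
    return snprintf(buf, len, "COLOR %d %d %d\n", r, g, b);
}
```

  • Keeping the payload line-oriented ASCII makes it trivial to eyeball in a serial monitor while debugging crossed TX/RX wires.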

Entry 1: -----------------------------------------------------------------

Date: April 15th
Start Time: 9:30am
Duration: 3 hours
  • We began this lab session with the central meeting where Dr. Walter explained to us that this was the second to last week and it is getting down to the wire for completing these projects.
  • Dr. Walter and the TAs then came over to our team’s lab station to talk to us as a group. He wanted to see how much more work we needed to complete before the end of the semester.
  • The group and I think we could get the project done before the week is over if we work really hard.
  • I then talked with the group so they could catch me up on what I missed last week. It sounds like everything is ready except for the buck converter, and there are issues with the audio sensor on the actual test board. Last week the code worked on the dev board, but for some reason the microphone causes the microcontroller to continuously reset.
  • We need to fix these problems this week.
  • I learned that last week Gavin thought it was an issue with my code, so he wrote new audio sensor code to test with. I was confused why he concluded my code didn’t work for him.
  • I tested my code again today to show him that my code works and I was right that it shows the dominant frequency, magnitude, and changes the color properly.

  • example2
  • You can see it moving between red, green, and blue on the LED strip while I played a song. This shows that my code is working, so I believe it must be a hardware issue.
  • While Kyle was diagnosing the hardware issue, I moved on to try and get the Jetson Nano working.
  • I went through these steps to get the system set up:
  • I flashed the microSD card with the Nvidia OS and then plugged it into the Jetson with the lab station monitor, keyboard, and mouse plugged into the Jetson. I used a 5V, 2A barrel jack power supply to power the Jetson.
  • I was able to get to the setup steps on the OS but when I completed these steps the power supply stopped working and the Jetson shut off. I think I need more amperage than 2A (ideally 4A) for the Jetson to work optimally.
  • My next step here is to get a stronger power supply from Shivam.
  • At this point, Kyle had finished soldering a new board and we tried testing the audio sensor again. Gavin flashed my audio sensor code to the microcontroller and it seemed to work. Getting the dominant frequency to display and actually show the lights was kind of buggy and didn’t work as reliably as on the dev board but I think it was still enough for the PSDR checkoff.
  • When I come back this afternoon, hopefully I can get a better power supply from Shivam and continue working on the Jetson.

=============== Week 13 ================

Entry 1: -----------------------------------------------------------------

Date: April 8th
Start Time: 9:30am
Duration: 3 hours
  • Today we had ManLab. Dr. Walter brought us to the middle of the lab talking about what our focus should be this week.
  • I know this week will be a lighter week for me since I am visiting UPenn as a final candidate for a master’s program. I won’t be back until Friday, so it is unlikely that I will hit my 8 hours this week.
  • Our team is ready to check off almost all of our PSDRs very soon. My estimation is by this time next week, we will have all of the code in the same esp-idf project and have it run on the microcontroller on our final PCB.
  • Today my goal is to get the microphone code working in Gavin’s project environment and on his shared dev board with the lights. I want to get my audio sensor FFT processing to successfully change the color of the lights.
  • I was getting build errors when running my code in Gavin’s main.cpp. I discovered that I was initializing the C++ object, adc_oneshot_chan_cfg_t, improperly. Here is what I changed it to:
  • adc_oneshot_chan_cfg_t chan_cfg = {
    		.atten = ADC_ATTEN_DB_11,
    		.bitwidth = ADC_BITWIDTH_12
    	};
  • This finally built, and I was able to flash and monitor on the ESP32. I put the same 500 Hz YouTube sound up to the microphone to ensure that it was printing the dominant frequency with its corresponding magnitude, and I am happy with the results:

  • example2
  • According to Google Gemini:
    Audible Range:
    The human ear can typically perceive sounds ranging from 20 Hz (low rumble) to 20,000 Hz (20 kHz, high-pitched whistle).
    Common Frequency Ranges:
    Bass: 20-250 Hz (sub-bass, 20-60 Hz, and bass 40-250 Hz)
    Low-Mids: 250-500 Hz
    Mids: 500 Hz - 2 kHz
    High-Mids: 2-4 kHz
    Highs: 4 kHz - 20 kHz (treble, overtones, percussive notes)
  • Therefore, I need to think about increasing my sampling rate to cover more of this audible spectrum (by Nyquist, a given sample rate only resolves frequencies up to half that rate).
  • As a test, I increased the sample rate to 20 kHz and the FFT size to 1024, and set print statements to make sure the delay between samples wasn’t getting too long, which would indicate the microcontroller is getting overrun with compute.
  • int64_t elapsed = esp_timer_get_time() - start_time;
    	printf("Sample time: %lld us\n", elapsed);
  • If elapsed is significantly more than 1024 × (10^6 / 20000) ≈ 51,200 µs, I am falling behind.
  • The values I was getting were in the 2,000,000 µs range, which is roughly 2 seconds and way too slow. I could fix this using DMA and other hardware optimization methods, but instead I am going to bring the sample rate back down to 4 kHz, the upper bound for “high-mids” in the audible frequency band, which should be satisfactory for a basic music-to-color translator.
  • After running this code, I was getting more appropriate values for the elapsed time.
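  • The frame-time budget check above can be made concrete with a couple of tiny helpers (the function names here are mine, not from the project code):

```c
#include <assert.h>
#include <stdint.h>

/* Expected time (in microseconds) to capture one FFT frame of
 * n_samples at sample_rate_hz. */
static int64_t expected_frame_us(int n_samples, int sample_rate_hz) {
    return (int64_t)n_samples * 1000000 / sample_rate_hz;
}

/* Flag if the measured elapsed time is more than twice the ideal
 * frame time, meaning the sampling loop is falling behind. */
static int falling_behind(int64_t elapsed_us, int n_samples,
                          int sample_rate_hz) {
    return elapsed_us > 2 * expected_frame_us(n_samples, sample_rate_hz);
}
```

  • For 1024 samples at 20 kHz the budget is 51,200 µs, so the ~2,000,000 µs measurements were roughly 40× over budget.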
  • I now need to make helper files that perform the frequency to color mapping for me. I will create a new object struct in a new freq_color_mapping.h file that you can see here:

  • example2
  • And here is the corresponding freq_color_mapping.c file here that has the function map_frequency_to_color that will be called in my FFT.cpp:

  • example2
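  • Since freq_color_mapping.c only appears above as a screenshot, here is a guess at the general shape of map_frequency_to_color: a simple band split using the frequency ranges quoted earlier in this journal (the exact bands and colors in the real file may differ).

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint8_t r, g, b;
} rgb_color_t;

/* Illustrative frequency-to-color mapping:
 * bass -> red, mids -> green, highs -> blue. */
static rgb_color_t map_frequency_to_color(float freq_hz) {
    rgb_color_t c = {0, 0, 0};
    if (freq_hz < 250.0f) {
        c.r = 255;              /* bass: 20-250 Hz   */
    } else if (freq_hz < 2000.0f) {
        c.g = 255;              /* mids: 250 Hz-2 kHz */
    } else {
        c.b = 255;              /* highs: above 2 kHz */
    }
    return c;
}
```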
  • Here is the FFT.cpp main function called fft_control_lights that will be called by Gavin’s main function:

  • example2
  • After this I tried to build the project from scratch and got an error with “light.elf”, which I really don’t know the meaning of. I am done for the day and will resume working Friday when I get back from Philadelphia. Gavin is done with his parts, so he will try to debug it while I’m gone.

=============== Week 12 ================

Entry 4: -----------------------------------------------------------------

Date: April 4th
Start Time: 1:30pm
Duration: 3 hours

  • In this session, I will work with Gavin to combine our project folders so that I can get my microphone code integrated with his LED light code.
  • He has a Github project repository already set up and he already has a large file ecosystem so I am just adding my microphone file into his.
  • We had some difficulties configuring my computer to work with the new project folder. Mainly I just had to download the esp-matter library for it to build for the first time.
  • After that, I worked on changing my FFT code so that it can be called by the microcontroller scheduler. I want one function that can repeatedly be called that does everything from sampling to color and brightness change.
  • I wanted to run an intermediate test but Gavin and I quickly realized that the legacy driver/adc.h library that I have previously been using won’t work with the updated driver in his project directory.
  • I have to switch my code over to using adc_oneshot which has the updated signal processing functions.
  • This is the FFT.cpp file after this session.

  • example2 example2
  • Here is the new FFT.h file after this session that can now allow these FFT functions to be called by main:

  • example2
  • I spent the rest of the session turning in everything I need to for the week on Brightspace. I also spent time converting my Google Doc project journal into HTML.

Entry 3: -----------------------------------------------------------------

Date: April 1st
Start Time: 2:30pm
Duration: 1 hour
  • At my desk at home I plugged in a USB keyboard, mouse and HDMI monitor into the Jetson but realized I was missing a power supply.
  • There are two options for supplying power to the Jetson Nano, one of which is through a micro USB port, the other is through a barrel jack.
  • I have a micro USB cord but the power brick associated with it is only able to supply a maximum of 2A which is not enough for the AI model I want to attempt to run on the Jetson.
  • I want to supply power to the Jetson in a more controlled way so I will try again with the power supply in lab with a barrel jack cord to supply the 5V with at least 3A to give the machine enough power.
  • I also read that an SD card is used to flash the NVIDIA Jetson operating system. This is something I am unfamiliar with and will have to test out in lab as well.

Entry 2: -----------------------------------------------------------------

Date: April 1st
Start Time: 9:30am
Duration: 2 hours
  • ManLab today. At the beginning of class, I caught up with the group on what we’ve all been working on. It sounds like everyone is making good progress on their subcomponents and we should be ready to get some of our PSDRs checked off.
  • I moved my microphone over to Gavin’s dev board because he has the light strip connected.
  • I also got his lightstrip code from our shared Github repository.
  • I spent the rest of class researching and planning how to program the Jetson Nano.
  • The cool thing about Whisper is that it can handle more complex requests and can translate multiple languages. Vosk is lighter-weight, but I don’t think it can translate multiple languages.
  • What I’ve concluded is the data stream will go: Input chunk of raw audio as .wav file via UART from ESP32 → Jetson Nano running STT (speech-to-text) model → output red, green, blue values based on keyword sent back to the ESP32 via UART.

Entry 1: -----------------------------------------------------------------

Date: March 31st
Start Time: 12:30pm
Duration: 4 hours
  • Went back to my sampling code to try and get the base FFT case working again. As I think about it more, FFT seems to be the most widely used algorithm and that must be for a reason. For my experience level, I need an algorithm that has the most support tools available on the internet to help me even if it might not be the most optimal algorithm for performance in this use case.
  • I reorganized my old working audio sampling code as a starting place.

  • example2
  • After I had the sampling working again, I wanted to move to implementing FFT but in a slow, TDD way so that I could isolate errors and build upon each test.
  • The first thing to do when using esp-dsp for FFT is initializing the FFT structure. I wrote a function to do this that would return false if there were errors.
  • // Initialize FFT structures
    	bool initialize_fft() {
    		esp_err_t ret = dsps_fft2r_init_fc32(NULL, FFT_SIZE);
    		if (ret != ESP_OK) {
    			printf("FFT initialization failed with error: %d\n", ret);
    			return false;
    		}
    		return true;
    	}
  • The function passed so I moved on to the next step, running the FFT algorithm. Here is the function I wrote:
  • void perform_fft(float* complex_buffer) {
    		// Execute FFT in-place on complex data
    		dsps_fft2r_fc32(complex_buffer, FFT_SIZE);
    		// Bit-reversal for proper ordering of FFT output
    		dsps_bit_rev_fc32(complex_buffer, FFT_SIZE);
    		// Convert FFT complex results to magnitude spectrum
    		dsps_cplx2reC_fc32(complex_buffer, FFT_SIZE);
    	}
  • This ran with no errors but I need to print out values to visualize the algorithm working. Here I print out the first 10 frequency bins for each sampling cycle. This test was at ambient quiet lab volume.

  • example2
  • Since I am getting negative values for magnitude, it seems something is wrong with the dsps_cplx2reC_fc32(complex_buffer, FFT_SIZE); calculation. I decided to calculate magnitude on my own instead using sqrt(real*real + imag*imag).
  • This gives me valid positive outputs.
  • I now tested it with a constant 500 Hz tone played from my phone. Interestingly, the frequency bin that registered the greatest output magnitude was bin 36 [362 Hz] instead of the expected bin 32 [500 Hz].

  • example2
  • I suspect this error is due to some timing bias in the microcontroller. The function I am using in my sampling function, esp_rom_delay_us(), doesn't guarantee exact timing due to overhead. A better approach is to use a hardware timer and esp_timer.h to precisely control the sampling interval.
  • For now, I am going to calculate the bias and adjust my sampling rate to reflect it.
  • Actual sampling rate = expected frequency (500 Hz) × FFT size (256) / measured bin number (36) ≈ 3555.6 Hz
  • This didn’t fix the bias; if anything, it made it worse. It turns out that changing my #define SAMPLE_RATE doesn’t actually change the real sampling speed. The real sampling rate is controlled by the delay (esp_rom_delay_us) in my sampling loop.
  • So I avoided the shortcut, went back and made the sampling frequency 4000 Hz, and included the esp_timer.h library. I then tweaked my sample_audio() function to use the more accurate hardware timer.

  • example2
  • After reconfiguring the project, I ran the new code with my 500 Hz tone and got a very good output:

  • example2
  • As you can see, I get a very consistent spike in magnitude in the 500 Hz bin showing my FFT algorithm works!
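  • The sanity check behind all of this is the bin/frequency relation freq = bin × sample_rate / fft_size; a quick sketch (helper names are illustrative):

```c
#include <assert.h>

/* Center frequency of an FFT bin. */
static float bin_to_freq(int bin, float sample_rate, int fft_size) {
    return bin * sample_rate / fft_size;
}

/* Nearest bin for a target frequency (rounded). */
static int freq_to_bin(float freq, float sample_rate, int fft_size) {
    return (int)(freq * fft_size / sample_rate + 0.5f);
}
```

  • With a 4000 Hz sample rate and a 256-point FFT, a 500 Hz tone should land in bin 32, which matches the spike I observed.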
  • Here is the working code after this session:

  • example2 example2
  • My next step is to show my teammates in lab tomorrow and get this code integrated with the lightstrip code so that it can start controlling the colors automatically.

=============== Week 11 ================

Entry 6: -----------------------------------------------------------------

Date: March 28th
Start Time: 1:30pm
Duration: 2 hours

  • After a lunch break I resumed working on the A9 - Legal document. I am doing research on current patents that would make sense to put in the Legal Liability Analysis section.
  • Finished the draft of that section. I then sent the draft to my teammates for them to peer review over the next day and then I’ll submit it tomorrow (Saturday).
  • I then spent more time updating my journal and getting the weekly assignments submitted on Brightspace.

Entry 5: -----------------------------------------------------------------

Date: March 28th
Start Time: 9:00am
Duration: 3.5 hours

  • Moving on to A9 - Legal and Regulatory Analysis document.
  • Going to put in a work session for this as it is due Saturday.
  • Finished the draft of the Regulatory Analysis section
  • Moving onto the microcontroller again. I want to try and debug my issue from yesterday with fresh eyes and energy.
  • Gino is coming later too to help me if I need it.
  • I fixed the issue right away this morning. When I booted up VSCode with the ESP plugged in, it immediately noticed I was missing the ESP-IDF and prompted me to download it through the VSCode extension. This worked around yesterday’s problem of not being able to download the library from the command line because I was stuck in the old virtual environment.
  • I ran my code with assertions that the kernel.h is giving me valid values. For test driven development, I am using the unity.h library which is preinstalled with the ESP-IDF. Here is the code and output for this test:

  • example2 example2
  • I then tried to move on to another unit test of testing the mdct case for a zero input. This revealed a lot of issues with double instantiations of global variables that I had to fix as well as noting that the program was dereferencing a null pointer. I had to add the assertions:
    • assert(input_buffer != NULL);
    • assert(mdct_out != NULL);
  • This assertion failed. I was also getting warnings when I flashed saying the use of driver/adc.h was an outdated library and that I should move to esp_adc/adc_oneshot.h. I tried to move my code to this library. This led to more errors with libraries and I then went down a debugging rabbit hole with issues in my ESP-IDF library and more. After moving and deleting some things that I thought might help, I am stuck on the issue of my build getting stuck at this line every time:
  • “[1186/1193] Completed 'bootloader' ”
  • I am going to take a break from this for a bit and I hope when I come back another time I can resolve this issue and move efficiently like I did with the last persistent issue.
  • I will have to resume this work next week. Gino is taking the dev board for the weekend to work on the TFT display.

Entry 4: -----------------------------------------------------------------

Date: March 27th
Start Time: 1:30pm
Duration: 1 hour

  • After lunch I resumed trying to fix the double semicolon issue. I am very stuck and frustrated.
  • I think I’m having issues with my virtual environment (venv) now.
  • I deleted the esp-idf library and tried to reinstall it, but it keeps giving me errors saying I’m still in the original venv and can’t install esp-idf while being in a venv.
  • I also can’t deactivate the venv because it isn’t recognizing the command.
  • I feel like every time I take one step forward (e.g., making a kernel window function for the MDCT algorithm), I move five steps backward with these errors.
  • I am done for today on the microcontroller. Will resume tomorrow.

Entry 3: -----------------------------------------------------------------

Date: March 27th
Start Time: 10:00am
Duration: 2.5 hours

  • After reviewing this research and having another discussion with ChatGPT on which algorithm is optimal for my use case, I decided upon the SMDCT with kernel windowing.
  • Set up new files, mdct.c and mdct.h, to begin TDD.
  • First wrote a function to precompute the kernel window.

  • example2
  • Received memory errors during build, so I ran idf.py size to analyze memory usage on the ESP32-S3.

  • example2
  • Determined that large global variables (kernel and window) were occupying too much DIRAM; switched them to pointers:
    • float **kernel;
    • float *window;
    • int16_t *input_buffer;
    • float *mdct_out;
  • After that change, idf.py size showed reduced memory usage.
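  • The global-to-heap move can be sketched like this (sizes and names are illustrative, and the 2-D kernel table is omitted for brevity; the key point is failing early instead of dereferencing a NULL pointer later):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define MDCT_N 256  /* illustrative frame size */

typedef struct {
    float   *window;
    int16_t *input_buffer;
    float   *mdct_out;
} mdct_buffers_t;

/* Allocate the large MDCT buffers on the heap instead of as
 * globals in DIRAM.  Returns 1 on success, 0 on failure. */
static int mdct_buffers_init(mdct_buffers_t *m) {
    m->window       = malloc(MDCT_N * sizeof *m->window);
    m->input_buffer = malloc(2 * MDCT_N * sizeof *m->input_buffer);
    m->mdct_out     = malloc(MDCT_N * sizeof *m->mdct_out);
    return m->window && m->input_buffer && m->mdct_out;
}

static void mdct_buffers_free(mdct_buffers_t *m) {
    free(m->window);
    free(m->input_buffer);
    free(m->mdct_out);
}
```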

  • example2
  • However, when running, the monitor froze on the line “I (145) boot: Disabling RNG early entropy source...”, indicating a likely IRAM overflow.
  • According to my research, IRAM likely contains sinf/cosf calls from math.h, which are intensive for the microcontroller.
  • To avoid this, I used ChatGPT to generate a precomputed kernel.h lookup table, offloading computation from the microcontroller.
  • Spent at least an hour trying to figure out why my code won’t flash — very frustrating process.
  • Encountered error: SERIAL_TOOL=/Users/aidanmcgoogan/.espressif/python_env/idf5.4_py3.13_env/bin/python;; — indicating the build file has a corrupted path with two semicolons.
  • Still struggling to fix it. Took a break for lunch.

Entry 2: -----------------------------------------------------------------

Date: March 25th
Start Time: 2:30pm
Duration: 1 hour


Entry 1: -----------------------------------------------------------------

Date: March 25th
Start Time: 9:30am
Duration: 2 hours

  • This was Tuesday’s Mandatory Lab session.
  • Dr. Walter met with all of us at the beginning of class, giving a rundown of what is due this week and what the priorities are.
  • After discussion with the group, I decided to take the responsibility of the individual assignment for this week since this is my least busy week for basically the rest of the semester.
  • Spent the rest of the session beginning research for the A9 - Legal and Regulatory Analysis.

=============== Week 10 ================

Spring Break

=============== Week 9 ================

Entry 4: -----------------------------------------------------------------

Date: March 14th
Start Time: 10:00am
Duration: 2.5 hours
  • I began this session by reviewing my learnings from yesterday and starting from the beginning in my research of FFT.
  • Discovered a Reddit post discussing the Sliding DFT (SDFT) algorithm, which I had never heard of before.
  • Also found a Stack Overflow post discussing real-time FFT processing and SDFT.
  • Pivoted my research from FFT to a deep dive into SDFT.
  • Learned that SDFT removes the FFT constraint requiring buffer sizes to be powers of 2.
  • Found a useful article explaining SDFT.
  • Here are the conclusions I drew from my research of both the FFT and SDFT algorithms:
    • When FFT is a Good Choice:
      • If I need to analyze multiple frequency bands (e.g., different colors for bass, mid, treble).
      • If I have enough processing power to compute FFT efficiently.
      • If slight delay is acceptable (e.g., lights react within ~100ms instead of instantly).
    • When Sliding DFT is a Good Choice:
      • If I want a near-instant response (light changes without noticeable delay).
      • If I’m only tracking a dominant frequency (instead of a full spectrum).
      • If computational efficiency is important (avoiding high CPU usage on the ESP32).
  • Concluded that SDFT aligns more with our use case and plan to implement it after spring break.
  • Spent time posting my journal to the website and completing the journal summary Brightspace assignment.

Entry 3: -----------------------------------------------------------------

Date: March 13th
Start Time: 2:30pm
Duration: 2 hours
  • Spent time confirming the bugs in my code.
  • Had a friend review the problem and he suggested going back to the function definitions in the esp_dsp library to see how the variable is dereferenced at the source.
  • Dug deep into the issue and found that I was not initializing two critical variables related to the FFT table buffer.
  • Went down the rabbit hole of the esp_dsp library and read through the following files to check if I was implementing each function correctly:
    • dsps_fft4r.h
    • dsps_fft4r_fc32_ansi.c
    • dsps_fft4r_fc32_ae32.c
  • Realized that I need to do more research on the FFT algorithm itself before continuing implementation.
  • Lesson learned: In the future, I should research the math behind a complex algorithm like this before jumping straight into the code.
  • Helped Kyle print the completed PCB footprint and validated that everything was the correct size.
Next Steps:
  • Conduct further research on FFT theory and implementation.

Entry 2: -----------------------------------------------------------------

Date: March 12th
Start Time: 10:00am
Duration: 1.5 hours
  • I spent an hour researching and planning the Nvidia Jetson stretch PSDR. I have experience implementing AI models for different applications so this was interesting to explore.
  • What I discovered regarding speech-to-text models is that the best ones for this lightweight use case are Whisper, Vosk, or DeepSpeech; I also have a lot of experience with the Wav2Vec2 model from Meta that is available in HuggingFace’s Transformers library. All of these models take in .wav files and output text that I could feed into another script that changes the lights.
  • I am imagining a speech format like, “lights, green” for example that would first get the attention of the model and then know that the next word coming is the color to switch the LED strip light to.
  • With this software stack and plan in mind, I began researching how the hardware would interface. I have never used Nvidia Jetson modules before so I had to research which unit was optimal. In my research, I came to the realization that there are no Jetson Orin Nanos available to buy. They are on complete backorder for months so I don’t think this stretch PSDR will be possible.
  • I am still highly passionate about using a STT (speech-to-text) model with this product and I have found YouTube videos of people running these models right on the ESP32. If I can get our stretch PSDR changed, I would love to try and do this.
  • Next step is to talk to the team about what I found and to talk to Dr. Walter at the next ManLab to suggest the stretch PSDR change.

Entry 1: -----------------------------------------------------------------

Date: March 11th
Start Time: 9:30am
Duration: 2 hours
  • Group meeting with Dr. Walter – focus for the week is ordering PCBs.
  • Resumed prototyping with microphone. My goal is to completely finish the FFT script this week.
  • Rewrote FFT script for better efficiency.

  • example2 example2
  • Built, flashed, monitored and got stuck right on this line:

  • example2
  • Tried diagnosing this problem with ChatGPT. The first thing I tried was manually hitting the reset button after flashing and then flashing again. This did not fix the issue.
  • I tried running “idf.py fullclean” which has fixed issues in the past. This did not fix the issue. Like before, the monitor loops continuously through this before ending with the entropy source line:

  • example2
  • I commented out the perform_fft() function call line in the main loop to just run the audio sampling and that works. So I believe I have an infinite loop in my perform_fft() function. This is the same issue I had when I was working on FFT weeks ago.
  • I then spent time doing research on open-source implementations of FFT code on the ESP32 that don’t use the Arduino library.
  • None of these sources gave me ideas to change my code in any way. I will have to try again another day or get help from a TA who has done DSP on an ESP before.
Next Steps:
  • Seek help from a TA experienced in DSP on ESP32.

=============== Week 8 ================

Entry 4: -----------------------------------------------------------------

Date: March 7th
Start Time: 12:30pm
Duration: 1 hour
  • Delivered midterm design review presentation.
  • Main critical feedback was on schematic and PCB design.
  • Received positive individual feedback on the visuals, specifically the Gantt chart.
Contribution to Progress:
  • Gathered actionable feedback for refining PCB design.
Next Steps:
  • Incorporate feedback into PCB design revisions.

Entry 3: -----------------------------------------------------------------

Date: March 6th
Start Time: 3:30pm
Duration: 1.5 hours
  • Met with group to finalize slides for presentation.
  • Assigned speaking parts to each group member.
Results:
  • Ensured all group members were prepared for presentation.
Next Steps:
  • Deliver presentation and gather feedback.

Entry 2: -----------------------------------------------------------------

Date: March 5th
Start Time: 11:00am
Duration: 1.5 hours
  • Met everyone in lab and reported my main takeaways from previous day's presentations to the team.
  • Key insights:
    • Most teams focused on electrical schematic diagrams and PCB layouts.
    • Common feedback included labeling chips and avoiding acute angles in PCB wiring.
    • Kyle is the main contributor to the design of the PCB so most of the advice was directed toward him.
  • Gino, Gavin, and I worked on the other aspects of the presentation.
  • Created multiple graphics using LucidChart and ChatGPT.

  • example2

    example2

    example2

    example2

    example2
Results:
  • Team gained insights into strong presentation practices.
  • Completed key visuals for our own presentation.
Contribution to Progress:
  • Improved understanding of presentation expectations.
  • Prepared high-quality graphics to support our slides.
Next Steps:
  • Finalize slides and assign speaking parts.

Entry 1: -----------------------------------------------------------------

Date: March 4th
Start Time: 12:30pm
Duration: 2 hours
  • Reviewed presentations from Groups 4 and 7.
  • Asked detailed questions and provided constructive feedback.
Key Learnings:
  • Gained insights into common presentation strengths and weaknesses of other teams.
  • Groups put most emphasis on their schematics and PCB designs. This is what we will focus on in our presentation.
Next Steps:
  • Share these key takeaways with the team.

=============== Week 7 ================

Entry 4: -----------------------------------------------------------------

Date: February 28th
Start Time: 3:30pm
Duration: 2 hours

  • Prepared for team’s midterm design review next week.
  • Imported slides template from Brightspace and shared it with the group.
  • Focused on detailing and professionalizing the slides to improve project reception.
  • Posted journal for the week.
  • Planned to work on audio sensor and refine presentation slides over the weekend.
Contribution to Progress:
  • Organized team slides and speaker notes for midterm presentation.
  • Maintained clear project documentation.
Next Steps:
  • Continue working on audio sensor prototyping.
  • Refine and finalize midterm presentation slides.

Entry 3: -----------------------------------------------------------------

Date: February 27th
Start Time: 10:00am
Duration: 2 hours

  • Attempted prototyping LED strip light again but was unsuccessful.
  • Used ESP-IDF LED Strip Example as a reference.
  • Utilized ESP-IDF Issues and ChatGPT for debugging.
  • Reached expected terminal output "I (281) example: Start LED rainbow chase," but LED lights did not respond.
  • Spent time debugging dependencies but ended up with more issues.
  • Plan to show group in lecture to troubleshoot further.
Results:
  • Identified software was running correctly, but LED hardware might be faulty.
Learning:
  • Sometimes debugging requires fresh perspectives from teammates.
  • LED strip hardware failures can be tricky to diagnose.
Contribution to Progress:
  • Progressed LED debugging by isolating potential issues.
Next Steps:
  • Discuss LED strip issue with teammates during lecture.

Entry 2: -----------------------------------------------------------------

Date: February 25th
Start Time: 9:30pm
Duration: 2 hours

  • Attended MANLAB central group meeting with Dr. Walters.
  • Confirmed project progress aligns with expectations—prototyping should be done, and PCB near completion.
  • Spent second hour debugging RGB LED for prototyping.
  • Tested LED lights on power supply and multimeter.
  • Set up ESP-IDF project for LED control.
  • Encountered GPIO issues where ESP32-S3 froze during output configuration.
  • VS Code did not recognize ESP-IDF include paths despite correct CMakeLists.txt configuration.

example2
example2
Results:
  • Confirmed hardware tests show LED lights receive power.
  • Identified possible ESP32 GPIO configuration issues.
Learning:
  • GPIO pin selection is critical for proper device communication.
Contribution to Progress:
  • Confirmed team is on track for midterm design goals.
  • Advanced debugging of LED strip setup.
Next Steps:
  • Find working GPIO configuration for ESP32 LED control and finish prototyping.

Entry 1: -----------------------------------------------------------------

Date: February 24th
Start Time: 7:30pm
Duration: 2 hours

  • Edited CMakeLists.txt to include "REQUIRES esp-dsp freertos driver esp_timer".
  • Encountered issues with FFT implementation:
    • ESP32-S3 crashed with Guru Meditation Error: LoadProhibited.
    • Suspected memory corruption before debug prints appeared.
    • Checked for stack overflow or heap issues.
  • Debug steps taken:
    • Verified ADC sampling worked without FFT.
    • Confirmed issue originated in perform_fft() or find_peak_frequencies().
    • Checked free heap size before and after FFT execution.
    • Added debug prints to isolate crash points.
  • Researched working FFT examples but did not find an immediate solution.
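Since the crash was narrowed to perform_fft() or find_peak_frequencies(), here is a minimal pure-Python sketch of the peak-finding logic those functions are meant to implement. This is an illustration only: the function names mirror the ESP32 code but the naive DFT, the 1 kHz sample rate, and the 100 Hz test tone are my assumptions, not the actual firmware.

```python
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum (O(N^2)); fine for a small sanity check."""
    n = len(samples)
    mags = []
    for k in range(n // 2):  # only bins up to Nyquist
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

def find_peak_frequency(samples, sample_rate):
    """Return the frequency of the strongest non-DC bin."""
    mags = dft_magnitudes(samples)
    peak_bin = max(range(1, len(mags)), key=lambda k: mags[k])
    return peak_bin * sample_rate / len(samples)

# A 100 Hz sine sampled at 1 kHz lands exactly in bin 10 of a 100-point DFT
fs, n = 1000, 100
tone = [math.sin(2 * math.pi * 100 * i / fs) for i in range(n)]
print(find_peak_frequency(tone, fs))  # -> 100.0
```

On the ESP32 the FFT itself would come from ESP-DSP; a host-side sketch like this just validates the peak-bin arithmetic independently of the memory issues being debugged.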
Results:
  • Narrowed the FFT crash down to what I believe are memory-related issues.

Learning:
  • ESP32 memory constraints must be carefully managed for FFT processing.
Contribution to Progress:
  • Improved understanding of ESP32 memory limitations.
Next Steps:
  • Continue debugging FFT implementation.

=============== Week 6 ================

Entry 3: -----------------------------------------------------------------

Date: February 21st
Start Time: 8:30am
Duration: 3 hours

  • Edited the functional description on the team website to reflect the advice of Dr. Walters.
  • Updated physical description of the product to align with plan tweaks made since the first couple weeks.
  • Researched and drafted FFT code.

  • example2 example2
Results:
  • Updated functional description on the team website to match revised plans.
  • Developed initial draft of FFT code.
Learning:
  • Revised functional descriptions should account for product evolution.
  • ESP-DSP library has built-in FFT functions that streamline implementation.
Contribution to Progress:
  • Ensured accurate documentation on the team website.
  • Prepared FFT implementation for future testing.
Next Steps:
  • Test FFT code on microcontroller Sunday/Monday when Gino returns it.


Entry 2: -----------------------------------------------------------------

Date: February 18th
Start Time: 9:30am
Duration: 2 hours

  • In ManLab, received feedback from Dr. Walters and GTAs on website project description and PSDRs.
  • Implemented necessary changes to the website after team discussion.
  • Ordered 3-pin LED lights from supply room for prototyping.
  • Discussed Gino’s TFT LCD display prototyping issues.
    • Suggested testing the voltage output of the TFT to determine if issue is with the component or microcontroller configuration.
Contribution to Progress:
  • Refined website content to align with project goals.
  • Enabled further prototyping with ordered LED components.
Next Steps:
  • Assist Gino in debugging TFT LCD display.

Entry 1: -----------------------------------------------------------------

Date: February 17th
Start Time: 1:45pm
Duration: 3 hours

  • Connected and flashed ESP32.
  • Set up and debugged ADC readings from MAX4466 microphone on ESP32-S3.
    • Diagnosed and fixed issue by switching from incorrect GPIO36 to correct GPIO4.
  • Fixed ESP32 ADC baseline issues by removing DC bias and amplifying signal variations.
  • Implemented a moving average filter to smooth out noise in audio data.

  • example2 example2
  • Developed a Python visualization script to plot real-time sound frequency and amplitude using Serial data.

  • example2
  • Resolved flashing issues with idf.py flash by killing processes locking the serial port.
  • Fixed ESP-DSP include errors by modifying main/CMakeLists.txt.
  • Resolved driver/adc.h missing dependency error.
  • Rebuilt ESP32 project using idf.py reconfigure, fullclean, and build commands.
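The moving average filter mentioned above can be sketched in a few lines of plain Python. This is a sketch only: the window size and the sample values are illustrative, and the real filter runs in the ESP-IDF code, not in Python.

```python
from collections import deque

def moving_average(samples, window=4):
    """Smooth a stream of ADC readings with a sliding-window mean."""
    buf = deque(maxlen=window)  # oldest sample is dropped automatically
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

# Noisy readings around the ~2048-count baseline get pulled toward their local mean
raw = [2048, 2200, 1900, 2100, 2050, 1950]
print(moving_average(raw, window=4))
```

The trade-off noted in later entries applies here too: a larger window gives smoother output but slower response to real amplitude changes.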
Results:
  • Successfully obtained ADC readings from MAX4466 microphone.
  • Implemented signal processing improvements for more accurate readings.
Learning:
  • GPIO pin selection is critical for correct ADC readings.
  • Moving average filters improve signal clarity.
  • ESP-IDF dependencies must be explicitly included in CMakeLists.txt.
Contribution to Progress:
  • Enabled accurate sound capture for FFT analysis.
  • Prepared codebase for advanced signal processing.
Next Steps:
  • Draft FFT script with real-time audio input.

=============== Week 5 ================

Entry 5: -----------------------------------------------------------------

Date: February 14th
Start Time: 6:00pm
Duration: 2 hours

  • Completed the project packaging specifications table and revised the concluding paragraph in the A6 Mechanical Overview paper.
  • Messaged the team for peer review on the A6 Mechanical Overview paper.
  • Worked on posting my weekly journals to the website.
  • Cleaned up the formatting of the website to improve readability and structure.
Next Steps:
  • Write C++ script for microphone data collection on ESP32

Entry 4: -----------------------------------------------------------------

Date: February 14th
Start Time: 1:15pm
Duration: 3.5 hours

  • I started and finished the A7 Bill of Materials document.
  • I sat with Kyle to learn how he was making the PCB design in KiCad.
  • I spent an hour debugging the connection between the microcontroller and my computer.
  • For future reference, I followed these steps to connect the microcontroller:
    • Plug in microcontroller.
    • Open project in VSCode through ESP-IDF extension.
    • Run “. $HOME/esp/esp-idf/export.sh” if idf.py is not recognized.
    • Run “idf.py --version” to check recognition.
    • Run “idf.py build” to compile main.cpp.
    • If errors occur, run “idf.py set-target esp32s3”.
    • Run “idf.py flash” to flash microcontroller.
    • Ensure the correct port using “ls /dev/cu.*” then use “idf.py -p /dev/cu.usbmodem2101 flash”.
    • Run “idf.py monitor” to execute the program.
    • Use “idf.py fullclean” to reset configurations.
    • Use “Ctrl + ]” to abort a running program.
  • I then worked on the PCB Footprint Layout for the A6 Mechanical Overview using LucidChart.

  • example2
Results:
  • Completed A7 Bill of Materials with all necessary components.
  • Gained foundational knowledge in PCB design using KiCad.
  • Successfully connected the ESP32 microcontroller to my computer.

  • example2
  • Created an initial PCB footprint layout for A6 Mechanical Overview.
Learning:
  • Learned how to navigate and troubleshoot ESP32 flashing issues.
  • Understood the importance of having multiple members proficient in PCB design.
  • Gained experience in creating a bill of materials and organizing part sourcing.
Contribution to Progress:
  • Finalized Bill of Materials, ensuring all parts are accounted for.
  • Established a knowledge transfer process for PCB design with Kyle.
  • Laid groundwork for ESP32 software development in C++.
  • Developed the initial functional block diagram for PCB layout.
Next Steps:
  • Rewrite microphone testing code in C++ for ESP32.
  • Continue refining PCB layout and schematic design.
  • Coordinate with Kyle to finalize footprint placements in KiCad.

Entry 3: -----------------------------------------------------------------

Date: February 14th
Start Time: 10:00am
Duration: 2 hours

  • Conducted research for A6 Mechanical Design Paper.
  • Identified the Aubric Intelligent WiFi LED Controller and the Govee RGBIC LED Strip Lights as the two best comparable products, each for different reasons.
  • Wrote a draft for the A6 Mechanical Design Assignment.
Results:
  • Compiled market research findings into a structured format.
  • Completed the first draft of A6 Mechanical Design Assignment.
Learning:
  • Gained insight into key differences between leading LED controller products.
  • Improved skills in drafting mechanical design documents.
Contribution to Progress:
  • Gathered necessary product comparisons to support A6 Mechanical Design Paper.
  • Completed the initial draft of the A6 Mechanical Design Assignment.
Next Steps:
  • Refine and expand the A6 Mechanical Design Paper.
  • Review and incorporate additional mechanical considerations based on comparable product research.

Entry 2: -----------------------------------------------------------------

Date: February 12th
Start Time: 11:30am
Duration: 1 hour

  • I started working on the A6 individual assignment.
  • I coordinated with Kyle for when we will meet to work on the PCB.
  • I planned out the CAD drawings needed for different parts for the mechanical overview.
Contribution to Progress:
  • Established a schedule for PCB work with Kyle.
  • Outlined necessary CAD drawings to support mechanical overview documentation.
Next Steps:
  • Continue working on A6 individual assignment.
  • Meet with Kyle to work on the PCB design.
  • Start drafting CAD drawings for the mechanical overview.

Entry 1: -----------------------------------------------------------------

Date: February 11th
Start Time: 9:30am
Duration: 2 hours

  • I attended the central team meeting with Dr. Walters, where he emphasized that PCB design should not be handled by only one person. I volunteered to assist Kyle with the design process.
  • I communicated with Kyle to schedule a PCB design briefing so I can contribute effectively.
  • I then spent an hour troubleshooting how to flash the ESP32 microcontroller onto my laptop with Gino.
Results:
  • I identified that flashing the ESP32 on Mac in VSCode requires installing the ESP-IDF extension.
  • I was not able to fully connect the ESP32 yet but gathered useful troubleshooting methods.
Learning:
  • Microcontroller flashing involves multiple setup steps, including ensuring correct drivers and serial connections.
  • ESP-IDF has an official extension for VSCode that simplifies flashing.
Contribution to Progress:
  • Prepared to assist Kyle in PCB design, ensuring a backup team member understands the process.
Next Steps:
  • Meet with Kyle for a PCB design briefing and begin contributing to the schematic.
  • Continue debugging ESP32 flashing issues.

=============== Week 4 ================

Entry 4: -----------------------------------------------------------------

Date: February 7th
Start Time: 2:30pm
Duration: 3 hours

  • I completed and submitted the Component Analysis document on behalf of the team, ensuring all sections were finalized and well-structured.
  • I wrote the introduction for the Component Analysis, summarizing the project’s hardware selections and design considerations.
  • I researched how to connect the ESP32 to my laptop and flash MicroPython onto the microcontroller, as this was my first time working with MicroPython.
  • I explored different methods for flashing firmware and setting up a development environment for running MicroPython scripts.
  • I spent an hour cleaning up my project journal, importing photos, and organizing past journal entries for better documentation.
  • Moving forward, I will test the ESP32 connectivity by running my Python script and verifying serial communication.
  • Next, I will begin writing code for FFT signal processing.

example2


Entry 3: -----------------------------------------------------------------

Date: February 6th
Start Time: 7:30pm
Duration: 2 hours

  • I conducted research for the A5 - Component Analysis document, specifically focusing on selecting the optimal microphone for our product.
  • I evaluated multiple microphone options, considering factors such as signal quality, power requirements, ease of integration with the ESP32, and cost-effectiveness.
  • I compared analog vs. digital microphones, assessing the trade-offs between ADC compatibility and I2S complexity.
  • After weighing the pros and cons, I selected the MAX4466 due to its built-in amplifier, compatibility with the ESP32’s ADC, and broad frequency response range (20Hz - 20kHz).
  • I learned that analog microphones (such as the MAX4466) provide simpler integration with microcontrollers using ADC, while digital I2S microphones offer higher quality but require more complex software implementation.
  • The MAX4466’s built-in preamp eliminates the need for an external amplifier, making it a more efficient choice.
  • Selecting components involves balancing technical capabilities, implementation complexity, and product scalability.


Entry 2: -----------------------------------------------------------------

Date: February 5th
Start Time: 10:30am
Duration: 1 hour

  • I attended a FaceTime meeting with the group to align on the decision made by Gavin regarding the LED strip selection.
  • We discussed the 3-pin vs. 4-pin LED strip options, with Gavin explaining his preference for the 3-pin LED strip.
  • We evaluated key factors, including individually addressable LEDs and market availability, ensuring the product remains adaptable to commonly used lighting solutions.
  • As a result, we reached a team-wide consensus on using a 3-pin LED strip for the project.
  • I learned that individually addressable LEDs provide greater flexibility for dynamic lighting effects and smart home integration, and that 3-pin LED strips are more widely available and compatible, making them a better choice for scalability and adaptability.
  • Moving forward, we will update the Component Analysis document to reflect the final LED strip decision and begin researching control methods and programming techniques for addressable LED integration with the ESP32.

Entry 1: -----------------------------------------------------------------

Date: February 4th
Start Time: 9:30am
Duration: 2 hours

  • I refined PSDRs, making key hardware and software scope decisions with the team.
  • The team decided that the TFT LCD display will have a wired connection to the microcontroller and be limited to displaying time, connectivity status, and possibly light color, with the majority of UI functionality being handled through the mobile app.
  • I updated the project website, adding document links and references to ensure accessibility for the team.
  • I then delegated sections of the A5 - Component Analysis document and scheduled a FaceTime meeting for tomorrow to discuss and coordinate the work.
  • We established clear functional expectations for each component and its role within the system. This was drawn on the whiteboard as seen below.

example2
Contribution to Progress:
  • We defined clear hardware and software boundaries for TFT LCD integration.
  • Finalized product choices, ensuring the team can move forward with ordering necessary components. That list can be seen below.

example2
Next Steps:
  • Meet with the team on FaceTime to discuss and finalize assignments for the Component Analysis document.
  • Begin working on the Component Analysis write-up for assigned sections.


=============== Week 3 ================

Entry 3: -----------------------------------------------------------------

Date: January 31st
Start Time: 9:30am
Duration: 3.5 hours

What I Worked On / How I Worked On It:
  • I spent several hours this morning researching ADC behavior on the ESP32, including how attenuation, resolution, and noise suppression affect signal quality.
  • I studied MicroPython’s ADC API, ESP32 datasheets, and signal processing techniques to optimize audio sampling.
  • I reviewed FFT concepts and how they can be applied to audio signals for future processing.
  • I analyzed past projects utilizing ESP32 for audio applications to compare best practices and potential challenges.
  • I drafted a MicroPython program for initializing GPIO36 as an ADC input for the MAX4466 microphone.
  • Configured 12-bit ADC resolution (4096 steps for high precision).
  • Selected 11dB attenuation to allow full 0V–3.3V input range, ensuring we capture the full amplitude of the microphone’s signal.
  • Implemented a simple moving average filter to reduce noise and stabilize readings.
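To make the numbers in this entry concrete, here is a small helper that converts raw 12-bit ADC counts back to volts and re-centers readings around the silence baseline. The 3.3 V full-scale, the 4096-step resolution, and the ~2048-count baseline come from the figures above; the function names themselves are mine, not the actual script's.

```python
FULL_SCALE_V = 3.3  # 11dB attenuation covers roughly the 0V-3.3V input range
ADC_MAX = 4095      # 12-bit resolution -> 4096 steps (0..4095)

def counts_to_volts(raw):
    """Convert a raw 12-bit ADC reading to volts."""
    return raw / ADC_MAX * FULL_SCALE_V

def remove_dc_bias(raw, baseline=2048):
    """Center a reading around 0 by subtracting the silence baseline."""
    return raw - baseline

print(round(counts_to_volts(2048), 2))  # mid-scale reading -> 1.65
print(remove_dc_bias(2300))             # 252 counts above silence
```

This matches the expectation below that a quiet room should read near mid-scale, with sound pushing readings above and below it.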

example2
Results:
  • I successfully drafted an initial MicroPython script for ESP32, which will:
    • Initialize GPIO36 as an ADC input.
    • Read microphone output at 12-bit resolution with 11dB attenuation for full signal range.
    • Apply a moving average filter to smooth noise before further processing.
    • Print ADC values to the serial console for debugging.
  • My expected results from testing in the lab next week:
    • ADC values should fluctuate between ~2048 (silence) and higher/lower values (sound detected).
    • When speaking, clapping, or playing a tone, the ADC readings should change in real time.
    • The moving average filter should reduce random noise but still allow responsiveness to sound changes.
    • If necessary, adjustments to attenuation levels or additional filtering will be made to optimize performance.
Learning:
  • The MAX4466 microphone outputs a DC-biased signal (~1.25V), so choosing an ESP32 ADC attenuation of 11dB (0-3.3V range) ensures the entire signal is captured without clipping.
  • 12-bit resolution provides precise readings, which is crucial for accurate audio signal processing.
  • Noise suppression techniques (such as moving average filtering) help stabilize readings but should be balanced to maintain real-time responsiveness.
  • Real-time audio processing challenges include ADC noise, quantization errors, and environmental interference, all of which will need tuning during lab testing.
Contribution to Progress:
  • Completed a functional ADC initialization script that serves as the foundation for real-time audio processing on the ESP32.
  • Conducted in-depth research on ESP32 ADC best practices, signal processing, and noise reduction to ensure the microphone’s signal is handled properly.
  • Established a clear testing plan for verifying ADC performance before moving into FFT analysis and LED visualization.
Next Steps:
  • Test the ADC script in the lab next week by:
    • Connecting the MAX4466 microphone to GPIO36 (ADC1_CH0).
    • Running the MicroPython script and observing real-time ADC readings in the terminal.
    • Checking for signal stability, noise levels, and responsiveness to sound input.
    • If necessary, adjust attenuation or filtering parameters to improve signal clarity.
  • Once ADC readings are stable, begin implementing FFT processing to extract frequency components from the microphone’s audio signal.
  • Document testing results and refine the code based on real-world performance.
Citations for this Session:


Entry 2: -----------------------------------------------------------------

Date: January 30th
Start Time: 1:30pm
Duration: 3 hours

What I Worked On / How I Worked On It:
  • I set up the power supply, oscilloscope, and multimeter at our workstation.
  • I researched the MAX4466 microphone pinout and features to understand its built-in amplifier.
  • I built a basic test circuit to visualize the microphone’s output using the oscilloscope.

  • example2
  • I researched the ESP32 microcontroller pinout and determined GPIO36 as the optimal ADC input pin for the microphone.

  • example2
  • I collaborated with Kyle to rewrite and reorganize our Project Functional Description for the website homepage.
  • I confirmed access to the team’s GitHub repository and successfully cloned it onto my laptop.
Results:
  • Successfully visualized the microphone’s output signal on the oscilloscope.

  • example2
  • Verified that the MAX4466 does not require an external amplifier due to its built-in gain.
  • Identified GPIO36 on the ESP32 as the best ADC pin for microphone input.
  • Reworked and refined the Project Functional Description to better encapsulate our project’s specifics.
Learning:
  • The MAX4466 microphone has a built-in amplifier and outputs a DC-biased analog signal, making it suitable for direct ADC input.
  • The ESP32 ADC1 pins (especially GPIO36) are the best choice for audio input, as ADC2 conflicts with Wi-Fi.
Contribution to Progress:
  • Verified hardware compatibility between the MAX4466 microphone and ESP32.
  • Ensured that the audio signal is strong enough for ADC sampling, eliminating the need for additional amplification.
  • Established clear documentation and repository access for streamlined team collaboration.
  • The team now has a refined functional description, strengthening our website’s messaging.
Next Steps:
  • Draft preliminary code for audio GPIO input to confirm communication between the MAX4466 microphone and ESP32.
  • Implement ADC sampling and data visualization to verify signal integrity.
  • Begin FFT implementation research to process audio data for frequency analysis.
  • Test real-time audio processing to prepare for LED visualization integration.
Citations for this Session:


Entry 1: -----------------------------------------------------------------

Date: January 28th
Start Time: 9:30am
Duration: 2 hours

What I Worked On / How I Worked On It:
  • I participated in an all-hands meeting to discuss project updates and direction.
  • I worked with the team to rewrite PSDRs to incorporate instructor feedback, ensuring clarity and precision.
  • I helped add specific numerical values to define the hardware scope.
  • I collaborated with the team to draft stretch PSDRs to outline possible extended features.
  • I created a hardware request form for an analog microphone, ensuring we have the necessary components for testing.
  • I researched how an analog microphone can be used for light output, exploring signal processing methods for LED visualization.
  • I outlined the programming structure needed for audio signal processing and LED control.
Results:
  • We improved the clarity and precision of the PSDRs based on instructor feedback.
  • Our team has clearly defined hardware scope using numerical values, preventing ambiguity in requirements.
  • The GTA fulfilled my hardware request for the analog microphone.
  • I gained a better understanding of the software requirements for converting sound input to light output.
Learning:
  • I learned that well-defined PSDRs are critical for ensuring alignment with project goals and instructor expectations.
  • Analog microphone output processing requires a structured approach to signal capture, amplification, and mapping to LED output.
Contribution to Progress:
  • Strengthened project documentation (PSDRs), making requirements clearer for the team and the public website.
  • Took steps to acquire the necessary hardware for testing.
  • Laid the groundwork for software development by outlining the programming structure.
Next Steps:
  • Finalize hardware selection for the microphone and related components.
  • Begin prototyping basic audio-to-light functionality.
  • Research and implement signal processing techniques for a clearer and more responsive light output.
  • Continue refining PSDRs as project scope evolves.
Citations for this Session:


=============== Week 2 ================

Entry 3: -----------------------------------------------------------------

Date: January 23rd
Start Time: 3:30pm
Duration: 1 hour

The group discussed our progress and addressed any challenges we faced individually and as a group. We began by catching up on what each member completed since our last meeting, ensuring everyone was aligned with the current state of the project. The discussion was open and collaborative, allowing us to share ideas and identify shortcomings in our work thus far. By acknowledging these gaps, we brainstormed potential solutions and strategies to overcome them.

During this session, I also focused on moving my Week 1 and Week 2 journal entries from my personal doc into the project website. Additionally, I wrote summaries for both entries, ensuring they were concise and met the requirements for the Brightspace submission due tomorrow. This task allowed me to reflect on the progress we've made so far and the challenges we've tackled as a team.

Entry 2: -----------------------------------------------------------------

Date: January 23rd
Start Time: 9:30am
Duration: 3 hours

I worked by myself this morning on the A2 documentation. Before this session, we delegated sections of the document for each of us to complete on our own, and I spent this work session finishing all of that writing. Here is what I completed:

Functional Statement:
- Drafted and finalized the functional statement for the Smart Home Adapter for LED Lights (SHALL). This section clearly defines the device's purpose and functionality, focusing on what the product does rather than how it achieves it. The statement highlights the SHALL's ability to provide advanced customization for LED lighting systems, including manual and automatic adjustments based on sound and environmental conditions.

Functional Block Diagram:
- Developed a detailed functional block diagram to represent the primary components and functionality of the SHALL. The diagram includes key elements such as the power supply, ESP32 microcontroller, environmental sensors, microphone module, TFT LCD display, mobile app interface, and LED light output. Each block was thoughtfully placed and described to visually communicate how the components interact without diving into implementation details.


example2
Computational Constraints Documentation:
- Documented the computational constraints for the SHALL, focusing on the tasks performed by the ESP32 microcontroller. The primary tasks include processing audio signals using an FFT algorithm to map amplitudes and frequencies to light intensity and color, as well as controlling the LED light outputs for smooth and dynamic visual effects. Emphasized the real-time processing and memory constraints necessary for seamless operation. The documentation also addressed task prioritization, timing precision, and efficient memory management to ensure reliable and responsive system performance.
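The frequency-to-color mapping described above could look something like the sketch below. Everything here is illustrative: the 20–2000 Hz band, the red-to-blue hue sweep, and the function name are my assumptions for demonstration, not the documented design.

```python
import colorsys

def freq_to_rgb(freq_hz, f_min=20.0, f_max=2000.0):
    """Map a dominant frequency to an RGB color: low tones red, high tones blue."""
    # Clamp to the expected band, then map linearly onto hue 0.0 (red) .. ~0.67 (blue)
    x = (min(max(freq_hz, f_min), f_max) - f_min) / (f_max - f_min)
    r, g, b = colorsys.hsv_to_rgb(x * 2 / 3, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)

print(freq_to_rgb(20))    # lowest band edge  -> (255, 0, 0), pure red
print(freq_to_rgb(2000))  # highest band edge -> (0, 0, 255), pure blue
```

Amplitude from the FFT could then scale the HSV value channel for brightness, keeping hue (frequency) and intensity (amplitude) independently controllable.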

Next session will be to meet with the team and discuss our progress on the A2 document.

Entry 1: -----------------------------------------------------------------

Date: January 21st
Start Time: 9:30am
Duration: 2 hours

The class began with a brief discussion among teammates to catch up on how their weekends went. Establishing a friendly connection with my group early on is important to foster collaboration and teamwork. Afterward, I reviewed the to-do list tasks for the week to understand our objectives.

Following this, the all-hands meeting was conducted, during which Professor Walters provided an overview of our responsibilities for the week. I then focused on drafting the functional description section for the A2 document. This task required significant team discussions to clearly define the scope and functionality of our project.

Later in the session, Professor Walters and the teaching assistants visited our station to review our PSDRs. While they were generally approving, they provided constructive feedback to refine and improve our project specifications.

For the remainder of the lab, the team worked on submitting part requests and delegated sections of the A2 assignment to ensure timely completion.

=============== Week 1 ================

Entry 2: -----------------------------------------------------------------

Date: January 16th
Start Time: 3:30pm
Duration: 1 hour

We are meeting before lecture today to finalize our PSDRs. We have a couple of good new ideas for working with computer vision via an NVIDIA Jetson. We are also organizing our team’s website and making sure all of our journals are posted successfully before Friday’s deadline.

Entry 1: -----------------------------------------------------------------

Date: January 14th
Start Time: 9:30am
Duration: 2 hours

We got a tour of the lab to get our bearings in the workspace. We then met with all of the graduate TAs to understand their backgrounds and expertise; in return, we gave them an idea of our engineering specialties and our project idea. After that, we returned to our workstation and began planning out the first week and delegating team roles on the whiteboard. I worked with Kyle on assignment A1 and finished it by the end of class.