First 250 days of software dev - Part 2


Day 11

I think today was also productive. The topics sank in a little more thanks to many examples on objects and arrays. Eleven days have passed, and I have started to form my own ideas on many subjects.

Today, in the spare time left over from my duties, I found a website: latentflip.

On this site I ran some code I wrote myself, along with some code I found that uses setTimeout and similar constructs, and watched how it works. I got a somewhat better idea of the call stack, the callback queue, the Web APIs, and how asynchronous structures work. I looked especially at examples related to async/await, and after the array and object topics I wanted to spend a lot of time on them, but there wasn't much opportunity. I'm going to try to complete the last assignment tomorrow, because Saturday evening is the only free time I gave myself.
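To make the call stack / callback queue behaviour concrete, here is a minimal sketch of the kind of setTimeout experiment I ran (not the exact code from latentflip):

```javascript
// Even with a 0 ms delay, the setTimeout callback goes through the
// Web API and callback queue, so it only runs after the call stack empties.
const order = [];

order.push('first');        // runs immediately on the call stack

setTimeout(() => {
  order.push('third');      // queued; runs after all synchronous code
}, 0);

order.push('second');       // still synchronous, so it runs before the timer
```

Right after this code runs, `order` is `['first', 'second']`; `'third'` only arrives once the stack is empty and the event loop picks the callback off the queue.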

Day 12

  • Today I did the given task first with functions and then with classes. I used static methods and a constructor separately, threw an error message with throw, used methods like .find, .filter, .forEach, .map, .toString, and .includes, and did operations on objects and arrays. This reinforced what I knew about the subject.
  • Afterwards, while researching concepts such as AJAX, XML, and JSON, I went over what AJAX is. With AJAX, it is possible to send a request to the server without refreshing the web page; data can be received from the server and processed. This allows for a more 'responsive' and faster user experience. Nowadays JSON is generally preferred over XML, because JSON is native to JavaScript and this makes for a 'lighter size' (when I try to translate some words into Turkish a lot of time is lost, so I write them like this; I have no intention of murdering the Turkish language :)). Both JSON and XML are used for packaging information, and the XMLHttpRequest object is used for communication with servers.
  • Before looking at how to make an HTTP request, it seemed necessary to look at what HTTP is. Hypertext => linked text. HTTP is designed for communication between a web browser and a web server. It is a stateless protocol: within the same connection, the server keeps no link (no state) between two requests. HTTP cookies and stateful sessions are still allowed on top of it. It is also a client-server protocol, meaning the request is initiated by the client (i.e. the web browser). During the exchange of messages between client and server there are proxies, which can temporarily hold HTTP responses (caches) to be reused for subsequent requests. Proxies are used for a few other things too, but that seems enough for now. The HTTP flow is as follows: 1. A TCP connection is created. 2. An HTTP message is sent. 3. The response sent by the server is read. Since creating the HTTP request and processing the server's response are quite detailed, I'll skip them for now; I think they will come up again later, so this much research is enough for today.
  • I found two sites with very good explanations that draw road maps for SEO. I want to take both of their explanations as a reference and finish this process. SEO topic of the day: Core Web Vitals.
There are three important metrics, followed by other topics: Largest Contentful Paint (LCP) -Loading-, First Input Delay (FID) -Interactivity-, and Cumulative Layout Shift (CLS) -Visual Stability-. There are also factors such as mobile friendliness, safe browsing, HTTPS, and intrusive interstitials (which means something like an unexpected pop-in; I can make sense of it but I can't translate it). According to Google, the first three in particular directly affect ranking, and Google penalizes bad URLs (such as URLs with poor CLS scores) more than it rewards good ones. Here's the thing: there is not always a correlation between page speed and ranking, so Google doesn't just look at page load speed; it is trying to measure the actual user experience. For a good user experience, the LCP should be 2.5 seconds or less. Element types counted for LCP: <img>, <video>, and block-level elements with a background image loaded via URL. I put our site back into one of the web page tests, looked again at the optimization suggestions it gave, and came across suggestions related to the live support section, but I couldn't come up with much of an idea. I know I'm at the beginning, but I think I'm trying my best. While browsing, I also came across things about the site and the use of Alexa, but I couldn't quite understand them yet. I need to do more research, learn, and practice. I feel very hungry for knowledge, and I hope this hunger continues for a very long time.
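To pin down the constructs from the first task of the day (a class with a constructor, a static method, throw, and those array methods), here is a small made-up sketch, not the actual assignment:

```javascript
class Product {
  constructor(name, price) {
    // throw lets us reject invalid data right at construction time
    if (price < 0) throw new Error('price cannot be negative');
    this.name = name;
    this.price = price;
  }

  // static: called on the class itself, not on an instance
  static cheapest(products) {
    return products.reduce((min, p) => (p.price < min.price ? p : min));
  }

  toString() {
    return `${this.name}: ${this.price} TL`;
  }
}

const products = [new Product('pen', 5), new Product('book', 40)];

const names = products.map((p) => p.name);                  // ['pen', 'book']
const cheap = products.filter((p) => p.price < 10);         // just the pen
const book  = products.find((p) => p.name.includes('boo')); // the book
const label = products[0].toString();                       // 'pen: 5 TL'
```

`Product.cheapest(products)` returns the pen, and `new Product('x', -1)` throws, which is exactly the error-message behaviour mentioned above.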
That's all for today.

Day 13

  • This morning I looked a little at how the web works, and then when I was given the task I concentrated on it. I thought the question was very good and there were things to think about. I thought a bit too long and wanted to use constructs I hadn't used for a few days; I think I relied a bit too much on trial and error. For example, I hadn't used the switch-case structure for a long time. Obviously it wasn't difficult, but it was necessary to set up the algorithm well. Of course my code had its shortcomings, but in general I enjoyed what I did, yes.
  • Later I started looking at XML and JSON. I was already familiar with these topics from yesterday's AJAX research, but today I learned how to use JSON. JSON is useful for communication between the web server and the client, and it is entirely a string structure, that is, a text-based format. What I really learned about JSON today was the .parse() and .stringify() methods. A JSON object is very similar in structure to a JavaScript object, but there is a big difference: one is an object, the other is just text (a string). So it is not possible to treat a JSON object as a JavaScript object directly. To turn it into an object, the .parse() method is used; if the opposite is desired, that is, if we want to turn an object into a string, the .stringify() method is used. So why do we need JSON or XML? Because the data exchange between the client and the web server does not use a JavaScript object directly; it uses JSON, which is text. I examined examples of the parse and stringify methods and printed them to the console myself to see the differences between them.
  • Obtaining JSON: for this we need to use an API: XMLHttpRequest, or XHR for short. It is a JavaScript object that allows us to make a network request to get resources from the server through JavaScript, which lets us update the page without refreshing it. I saw a detailed but at the same time simple code structure for obtaining JSON and handling the request and response. I read it, and I think I understand it.
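The parse/stringify difference I printed to the console can be shown in a few lines (the data here is made up):

```javascript
// A JSON "object" is really just text; JSON.parse turns it into a real object.
const text = '{"name":"Ali","skills":["JS","CSS"]}';

const obj = JSON.parse(text);      // now a JavaScript object
console.log(obj.skills[0]);        // "JS" - property access works

// JSON.stringify goes the other way: object -> string, ready to send to a server.
const back = JSON.stringify(obj);
console.log(typeof text, typeof obj, typeof back); // string object string
```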
I also looked at SEO a bit, but unfortunately it's not detailed enough to add here. I will try to make up for it tomorrow.

Day 14

Today I looked at JSON, XML, API, DOM concepts in general.
  • API
    Application Programming Interface. Unlike a user interface, an API is a set of rules and features within a piece of software that allows interaction between software. A web developer (or other developers, because it's not a technology used only on the web) can use one in their application to interact with the user's web browser. For example, with the getUserMedia API, audio and video can be captured, and the Geolocation API can be used to get location information. Most powerful APIs are only available in a secure context (HTTPS), because major attacks can be carried out by abusing them. One of the main purposes of HTTPS is to prevent MITM (man-in-the-middle) attacks on these powerful APIs. Since some APIs are powerful, they provide a great advantage to anyone attacking the system: attackers can gain low-level access to the user's computer or access user data, violating user privacy. For non-local resources to be secure, they must use "https://" or "wss://" URLs.
  • DOM
    Document Object Model. It is an API for interacting with XML and HTML documents. The DOM is a document model loaded in the browser that presents the document as a node tree (I saw a similar tree structure when I was studying data structures). The DOM is one of the most used APIs on the web. It allows code running in the browser to access and interact with each node in the document. In other words, the DOM connects the web page to a script or programming language by providing the structure of the document. With the DOM we have a logical tree: each branch of the tree ends in a node, and each node contains objects. DOM methods give us access to this tree and allow us to change the structure, style, and content of the document. Brief description: the DOM is a data representation of the content and structure of a document on the web. It represents the document as nodes and objects, which allows a programming language to interact with the web page. The DOM is not part of JS; it is a web API used to build websites. Node.js runs outside the browser and provides its own APIs, and the DOM API is not a core part of Node.js. So the DOM doesn't depend on any particular programming language and provides a structured representation of the document through a single API.
  • XHR
    It works asynchronously or synchronously. I learned that synchronous requests should really only be used inside web workers, but I haven't researched that part in detail. One thing I did learn: we should definitely choose asynchronous for performance. Synchronous requests block execution and cause 'freezing', which means an unresponsive user experience.
    • Today I tried to learn how to do operations with XMLHttpRequest, and I think I understood the parts I looked at. One example followed some steps to create a request and perform the operation: the information from the API was accessed with a URL; in the AJAX part, the response type (JSON) and the event handler were set up (for example, what to do when the server says 200 OK); and the request was opened with two parameters (GET, 'destination') and sent. The example was this: there was an API on a web page, and it printed words that rhymed with a word entered by the user in the result section. What was the point of the example? No matter how many times I did a word search on the generated site, there was never a page refresh while the results were being printed. So I think that's what AJAX did here.
    • Towards the end of the day I got into HTML and CSS. Obviously they have much simpler structures, but it will take some time to learn them without practice. The best thing I learned today was something you said about the subject: there are three ways to add CSS. If it is inside the HTML, it is either inline or internal depending on where it is used; if we create a separate CSS file in the same folder as the HTML, we call it external CSS.
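The request flow in that rhyme example looked roughly like this; the URL and response shape are placeholders, not the actual API from the example:

```javascript
// Sketch of an asynchronous XMLHttpRequest (XHR is a browser-only API).
// 'https://example.com/api/rhymes' is a made-up endpoint for illustration.
function fetchRhymes(word, onResult) {
  const xhr = new XMLHttpRequest();

  xhr.open('GET', 'https://example.com/api/rhymes?word=' + encodeURIComponent(word));
  xhr.responseType = 'json';          // xhr.response arrives already parsed

  xhr.onload = () => {
    if (xhr.status === 200) {         // the server said 200 OK
      onResult(xhr.response);
    } else {
      console.error('Request failed:', xhr.status);
    }
  };

  xhr.send();                         // asynchronous: the page never reloads
}
```

Because the request is asynchronous, the rest of the page keeps responding while the result is on its way, which is why no refresh was ever visible.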
Tomorrow I'll pick up where I left off.

Day 15

Today was all about studying, practicing, and researching CSS and HTML. I have to say I had a hard time suddenly switching from JavaScript to these subjects. However familiar I was with HTML, getting into design with CSS was unfamiliar territory, so I wanted to start from the basics and learn a lot quickly.
  • I finished most of the topics related to HTML; HTML Forms was the only one left, and I didn't go into it much because I worked on it a month ago and the main thing I had to work on was CSS. I completed the others.
  • In CSS I was able to start from the most basic concepts and get as far as Flexbox. I examined many examples, played with them, and designed a few sites, but if you ask how much I have learned: if you say 'add 3 and 2', I can do it, because today I learned what 3 and 2 are and how to use them. But if you ask 'sin3°+sin2°=?', since I don't know what sine and degrees mean, I would need to research them, look at examples, and work out solutions to the problem myself. In other words, I may not be able to do it on command, but I can do it after researching and learning how it is solved. That, briefly, is how today went.
  • I saw many different terms, such as flex, inline-flex, flex-wrap, relative positioning, justify-content, the difference between block and inline, using # as a placeholder URL in href, hover, and align-items, and I practiced with them. But I know it will stick better if I am given a single task and write the HTML and CSS piece by piece, researching as I go. So today I tried to lay a bit of a foundation, and tomorrow I will practice more, if we are going to continue working on this subject.

Day 16

Today I looked at different topics: CSS, HTML, XML and DOM, and regex.
  • In CSS and HTML today, I looked closely at the design of our website and the code within it. I tried to understand why each piece of code was written and what function each had. Of course, it seemed very complicated, because what I was looking at was the result of a great deal of work, so I tried to examine each element step by step. Some places seemed unnecessary at first; for example, I thought a piece of code in the navigation bar was useless, but then, when I reduced the size of the browser, I saw that it had a function. I saw that responsive web design is not easy. There were traces of Bootstrap in some places; for example, six div elements in the footer section caught my attention. Until noon I tried to recreate some parts of the header by imitating it myself. Did it work? No, but I think it will, because it's not conceptually difficult, just demanding. I realized that I like algorithm-related subjects more.
  • Now I should come to XML and DOM. At first I tried to understand the question. Then I thought: why should I bother when features like DOMParser already exist for this? First I looked for the easy way and found out there is already a function for this on the internet, but since the question asked me for an algorithm to extract meaningful objects from a string of text, I broke away from Google and thought about the algorithm. I had built an algorithm I thought was quite logical and started working on it. Then, when I saw a different solution in your approach, I put mine aside, but tomorrow, while analyzing your solution, I will work on how I can apply my own thinking.
  • Since you said "expecting you to do it with regex is like asking you to throw a missile", I became more interested in regex, and I looked into what regex is, what it is used for, and examples of its use. I found some of the requested parts in the xmlString text with regex. There were a few parts I couldn't match, but as you said, I guess it is not necessary, or even possible, to find everything with regex. Still, I think it was good to work on it and write a few things.
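A small sketch of the kind of regex extraction I tried; the tag names here are made up, not the actual xmlString from the assignment:

```javascript
// Pulling flat <tag>value</tag> pairs out of an XML-ish string with a regex.
// This breaks on nesting and attributes, which is exactly why real parsers exist.
const xml = '<user><name>Ayse</name><age>24</age></user>';

const pairRegex = /<(\w+)>([^<]+)<\/\1>/g;   // \1 = the same tag name, closing

const result = {};
for (const match of xml.matchAll(pairRegex)) {
  result[match[1]] = match[2];               // tag name -> text content
}
// result -> { name: 'Ayse', age: '24' }
```

The backreference `\1` is the interesting part: it forces the closing tag to match the opening one, so `<name>Ayse</age>` would not be captured.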
That's all I can tell you today.

Day 17

I did two things today: understanding the algorithm for extracting a meaningful DOM object from XML, and refreshing the older JavaScript topics we covered while reading the code you wrote, plus working on CSS. I don't think I have any problem with HTML, but I confuse the concepts in CSS; that's why I had a hard time designing a website today. At the end of the day, at least I was able to make a responsive header. Since the subject is so visual, I will watch videos on it this evening and tomorrow and try to make something myself. That's how I can summarize today. I will do my best to make up for what I am missing.

Day 18

Today was also spent on CSS and HTML. Even though I dove into divs a lot with Inspect Element, when I went into Sources and looked at the JavaScript code, I liked it a lot even though I didn't understand it. For some reason those topics seem much more appealing. Now, let's come to today:
  • I did some research on the internet for a carousel. I found code that works with JavaScript, but I didn't want to mess with it too much, so I did that part with just CSS and HTML. For most of the day I wanted to look at the design of our website and how it was structured, take my own time, and do it differently. It turned out different, and there were parts of me in it again. But it was a bit of a problem for me that the width of the body was not as wide as the page. Then it seemed to me that things were slipping, or would slip, into complete copy-paste, but if there is time, I will try to do it myself again.
  • The most important thing I learned today is that practice is necessary, and I did a lot of it. The design work I didn't like at all two days ago, I liked much more when I managed to do a little of it today, and I wanted to do more.
  • This will be somewhat unrelated, but a very nice game site about flexbox has been recommended by many people on the internet for understanding the subject. I want to wake up early tomorrow morning and finish it. I woke up early today too and watched some videos on the subject; they were also very helpful.
That's not really everything I could share today, but I want to write more fully tomorrow, since it will probably be CSS again.

Day 19

Today I completed the HTML and CSS parts of many sections of the website. It was not exactly the same; there were even a lot of differences, but it was an opportunity to examine and reproduce the design of a website and how its structure works, especially on our own site. I think I did a lot of flexbox examples. Although I am still a little confused, by practicing a lot I learned what the concepts are used for and how they are used. I also did exercises from the site I mentioned yesterday.
I tried to create something different and add things of my own, such as a carousel with a timer that automatically scrolls the elements inside it; for example, I also created something similar to our chatbox that grows as a pop-up.
Today I spent a lot of time practicing and it felt really good. It was very productive as our goal was to understand the site structure and the elements used.

Day 20

Today, first of all, I must say that if you hadn't given that algorithm structure, things would have been a bit more difficult. Actually, you asked the question and you also gave the answer; I'm a bit sorry about that, since I would have preferred to spend some time on it and learn it myself. But of course it was faster to reach the solution this way. First of all, we had a JSON string. After converting it into a readable object with a 'parser', we could make changes on the website with it and the JavaScript code we would write, which is what the assignment was about. In other words, our goal was to scan the data in the JSON object, match it with the appropriate parts of our website, and make the changes we wanted. The algorithm we needed to set up: scan each element of the JSON object for a particular property by creating a loop; then bring the desired part of the HTML document into the loop and update the values for each element. The part I had difficulty with was not the JSON object, objects, arrays, or HTML. The problem was finding HTML elements through JavaScript. That is the biggest thing I learned today: how to find HTML elements by their identifying properties like id, class, and tag, and how to update them. The first part of the day was spent making updates on the site I'm building to resemble our website. I completed the missing parts of the header section. I can list what is still missing, though there wasn't much time to complete it: the body width; correctly adjusting the elements on the right side of the carousel; places that still need a few edits (such as the popular categories section); the options that appear in the quick search section; the menu bar that stays fixed at the top when the page is scrolled down; and, most importantly, responsive design. For example, when the page size is reduced, the design doesn't behave like the original page. Some parts work, but the majority is a bit problematic.
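A minimal sketch of that loop, with a made-up JSON shape. In the browser the update step would be `document.getElementById(item.id).textContent = item.text;`; here I collect the updates into a plain object so the loop itself is visible outside a browser:

```javascript
// The JSON string coming from the "parser" step (hypothetical shape).
const jsonString =
  '[{"id":"title","text":"Summer Sale"},{"id":"price","text":"42 TL"}]';

// 1. Parse the string into a real array of objects.
const items = JSON.parse(jsonString);

// 2. Loop over each element and match it to a page element by id.
//    In the browser this line would be:
//      document.getElementById(item.id).textContent = item.text;
const updates = {};
for (const item of items) {
  updates[item.id] = item.text;   // stand-in for the actual DOM update
}
// updates -> { title: 'Summer Sale', price: '42 TL' }
```

The same matching could also be done with class or tag selectors via `document.querySelectorAll`, which is the "finding HTML elements through JavaScript" part that gave me trouble.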
Having talked about so many shortcomings and gotten a bit gloomy, let's come to the pros; after all, you need to weigh pros & cons to be realistic. One of the pluses was this: I really learned, in a basic sense, how the page structure works. I encountered many HTML elements and many CSS features and used them in many places. Even if I still get confused when using flexbox-related terms, I can find the solution quickly, and I think it will improve even more with practice. For example, today I linked to the live support pop-up box from within the site. Yes, it was easy, but it was nice and simple to do. In general, I think these are simple things, but they take time and effort, and I think that's very normal for our level.

Quote of the day:

" A ship in harbor is safe, but that is not what ships are built for. "- J.A Shedd