Thursday, October 8, 2020

Fun with Docker

I'm starting a new project for work soon. Instead of developing from the ground up, we'll be using an existing open-source solution and customizing it to suit the project's needs. I figured this is going to be more devops and less coding, as we'll mostly be modifying existing code, so I decided to play around with Docker since I don't have much experience writing Dockerfiles or docker-compose files.

I took one of my side projects, created with a Node.js backend and React frontend, and decided to "dockerize" it. I decided to start with the front end for now since the backend requires Redis.

# Use the official image as a parent image
FROM node:lts-alpine

# Set the working directory.
WORKDIR "/adventure-capitalist"

# Copy package.json from the host to the current location
COPY package.json .

# Run the command inside your image filesystem.
RUN npm install

# Copy the rest of the app's source code from host to image
COPY . .

# Add metadata to the image describing which port the container listens on
EXPOSE 3000

# RUN cd /adventure-capitalist/src/backend/data && node app.js

# Run the specified command within the container.
CMD [ "npm", "start" ]

After writing this file, I built it using docker build --tag adventure-capitalist:1.0 .

Docker builds the image, and once it's built I run it in a container using: docker run -p 8000:3000 --name ac adventure-capitalist:1.0


This should do it, right? I'll go to localhost:8000 and should see my front end, right? NOPE. I checked the container's logs with docker logs ac and didn't see any issues. I asked @manekenpix for a second pair of eyes, googled "dockerizing a react app", and found out why it wasn't working: I needed to add the -it flag, otherwise the container exits right away.


So the full command I had to use was: docker run -it -p 8000:3000 --name ac adventure-capitalist:1.0
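For what it's worth, the same fix can be expressed in a docker-compose file. This is just a sketch assuming the image name and ports above; stdin_open and tty are the compose equivalents of the -i and -t flags:

```yaml
# Hypothetical docker-compose.yml for the image built above.
# stdin_open and tty mirror docker run's -i and -t flags; without them
# the React dev server can exit immediately inside a container.
version: "3"
services:
  frontend:
    image: adventure-capitalist:1.0
    container_name: ac
    ports:
      - "8000:3000"
    stdin_open: true
    tty: true
```

With this in place, docker-compose up should behave like the full docker run command.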


References:

https://mherman.org/blog/dockerizing-a-react-app/

Friday, September 25, 2020

Padding your Github stats

These past few days I got to have some fun reviewing GitHub repos in @humphd's OSD600 class (I made sure I didn't take all the issues). I primarily focused on Python and JavaScript/Node.js repos as I'm most comfortable with them. It was actually really fun, as I got to suggest things they hadn't thought about or things I learned while attending OSD600/OSD700 a year ago. I must say though, these students are way better than I was when I first started OSD600... Extremely looking forward to their contributions for Hacktoberfest, and even more so if they decide to contribute to Telescope.

Monday, September 21, 2020

Pre-September

 Over the summer I was employed as a research assistant for Seneca on an NLP project. Now that we're wrapping up and with October coming around I decided to play around with Golang and look for some projects to work on aside from slaving away contributing to Telescope. My previous professor linked the repo for the backend of the Canadian COVID app and I found the perfect issue. If successful, I'll have contributed to something used across Canada...?




Friday, June 5, 2020

Attempt at Creating a Clone of Adventure Capitalist

After about 3 weeks of working on this project, I'm kind of done. Built with React/Node/Redis/Socket.IO, I learned a lot. The reason I say I'm only kind of finished is that unless I overhaul the whole backend of the code, I don't think I can get it working 100%. ... I know it looks awful haha.


The project was fun and challenging; there weren't any guidelines on how the project should be built aside from it being written in JavaScript/TypeScript. I initially tried using TypeScript, but it gave me headaches with the differences in import/export syntax. This is something I'll probably need to learn more about.

You can check out a version of a working game here (not mine). The hardest part about creating this clone was hiring a manager. Hiring a manager takes care of clicking for one of these shops. The number on the right shows the "cooldown" of the button before it can be pressed again. A manager automates this process, so whenever the shop is off cooldown, it gets clicked. The timer actually reflects how much cooldown time is left, and there should also be a progress bar providing a visual representation.

There were 3 issues to think about:

  1. The initial max cooldown of the Lemonade shop is 500ms. When a player purchases a certain amount of the shop, the max cooldown is halved, and this halving compounds with each threshold reached. So this shop could potentially be firing a request every 7.8ms (500 / 64). What tool should I use for this?
  2. How should I manage the auto clicking by managers?
  3. The managed shops should also keep running even if the window isn't open, and players should be informed of how much cash they earned while the window was closed.
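The cooldown halving in issue 1 can be sketched with a tiny function (the names here are mine, not from the actual project):

```javascript
// Hypothetical helper: the max cooldown halves every time the player
// crosses an ownership threshold, so it decays exponentially.
function currentCooldown(baseCooldownMs, thresholdsReached) {
  return baseCooldownMs / 2 ** thresholdsReached;
}

// With the Lemonade shop's 500ms base cooldown, six thresholds gives
// 500 / 2^6 = 7.8125ms between clicks.
```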
I looked around a bit and decided to use websockets, specifically Socket.IO. I thought using the traditional HTTP/GET request would destroy the backend since there could be a ton of requests being sent.

The second issue I kept thinking about was how to create the auto function for managing a shop, keep track of how much time was left, AND have all of this reflected on the front end. After thinking about this for a few days and getting nowhere, I reached out to @humphd, who suggested using the TTL (time to live) + Pub/Sub functionality of Redis. This was pretty cool as it had me researching keyspace notifications for Redis. That's all for now... I may blog more about this later.
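Issue 3 (crediting players for the time the window was closed) mostly comes down to arithmetic. A minimal sketch with hypothetical names:

```javascript
// Hypothetical sketch: a managed shop pays out once per full cooldown
// cycle, so offline earnings are just completed cycles times revenue.
function offlineEarnings(elapsedMs, cooldownMs, revenuePerCycle) {
  const completedCycles = Math.floor(elapsedMs / cooldownMs);
  return completedCycles * revenuePerCycle;
}
```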

Wednesday, May 6, 2020

Typescript + Linters

Taking a small break from Telescope until the summer semester resumes. I've started collaborating with an elementary school friend on a project to build a clone of the game Adventure Capitalist. After working with JavaScript for so long, I decided to try doing this in TypeScript. It went pretty well up until I had the following line of code:

const index = this.shops.findIndex((shop: Shop) => shop.name == shopName);

When I was trying to compile my code, I kept getting the following error

Property 'findIndex' does not exist on type 'Shop[]' 

Pretty sure this should work, as shops is an array of type Shop. As developers usually do when they run into issues, I started googling the problem and checking Stack Overflow. It recommended I change the "target" in my tsconfig.json to es2015 (findIndex() is an ES6 function) and add es6 to "lib". I did all that and tried compiling; still no good. I reached out to my frequent collaborator from Telescope, @manekenpix, and he suggested I just try running the code. It works?

Turns out it was a linter issue, and the code still compiled properly. Upon further research 2 hours later, I realized I was using the CLI command wrong, or at least the way I was using it was going to cause errors. I was compiling my .ts to .js with the command tsc index.ts instead of tsc; when a specific file name is given, tsc disregards the tsconfig.json settings and just tries to compile your TypeScript to JavaScript. So I tried running tsc, and it worked! No errors, and it output all the compiled .js files into the /build folder (ignored in .gitignore) I specified in my tsconfig.json file.
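For reference, a tsconfig.json along the lines described above might look like this (a sketch; the lib entries and paths are assumptions, not my exact file):

```json
{
  "compilerOptions": {
    "target": "es2015",
    "lib": ["es6", "dom"],
    "outDir": "./build"
  },
  "include": ["src"]
}
```

Remember: plain tsc picks this file up, while tsc index.ts ignores it.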

Thursday, April 30, 2020

Data Structures and Algorithms

I'm finally done all my courses, and since the job market isn't that great right now, I've taken a different approach. Instead of working on personal projects or contributing to open source, I've decided to brush up on data structures and algorithms for a bit.

One thing I found lacking for Seneca's Computer Science related programs was the science portion of Computer Science, maybe it was because I was enrolled in CPA and not their BSD program.

For CPA, the only course that deals with data structures and algorithms, DSA555, is offered as a professional option. After taking the course I understood why; as a pretty smart person in the class said, "If this was a mandatory course, a lot of people would be dropping out of the program, it was pretty hard." I still wish there were another similar course or two offered so we could learn more about analyzing the run times of more complex functions and graphs.

I took DSA555 last winter and have more or less forgotten how to implement most of the things I learned in the class, or how they work: linked lists, trees, different types of searches and sorts. So now, as I type this blog, I am solving and looking at problems on LeetCode.

A friend of mine currently works for Tesla and is looking for a new job. Most of the places he's been interviewing at for a full stack position have also asked him data structure and algorithm questions on top of questions involving joining two tables in SQL or how to mock a request for testing.

I think this is fair as it makes a developer conscious of the code they write and makes it easier to recognize patterns and respond accordingly.

For example, say I have an array of sorted numbers and I have to find if a given number exists:

  1. I could loop through the array and check each element.
  2. I could check if my given number is the middle element in the array. Depending on whether it is bigger or smaller, I can repeat the same steps on the upper or lower half of the array, until the number is found or determined not to exist.

The second option sounds tedious, but depending on the size of the array, it can turn out to be much faster than the first.
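The second option is binary search. A sketch in JavaScript:

```javascript
// Binary search on a sorted array of numbers. Returns true if target
// exists. Each step halves the remaining range, so it does O(log n)
// comparisons versus O(n) for the linear scan.
function contains(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return true;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return false;
}
```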

It also makes developers think about the function they are writing performance-wise. Is it an O(n) solution? O(n^2), or worse, O(n^3)? If it's one of the latter two, can I improve its run time? For personal projects this may not matter much, but if you are working on software or systems used by millions of people or containing a ton of data, these little things start to add up!

Thursday, April 23, 2020

OSD700 Post 1.0

When our prof doesn't have to review our feeble PRs, he's on a mission.


Tuesday, April 14, 2020

OSD700 Release 1.0

We've finally hit 1.0 for Telescope. What a journey.

I finished up the issues I was still working on from 0.9 and worked on a few more:

PR #919: Feed Type Should Support Delete, Modify (Delete + Create)
Overall, I did not expect the whole "add feed" feature to be this big. We had 4-5 people working on this: 3 on the back-end, 1 on the front-end, and our prof helping out on both ends. Happy to say we have it working. My PR was working as expected, but it wasn't doing great performance-wise due to multiple Promise.all() calls and awaits; with help from our prof, we were able to get rid of a lot of them. I learned that if you want to trigger our prof, wrap a Promise.all() within another Promise.all().

PR #931: Add a Way to Receive Updates when New Posts are Available
PR is done, I'm beginning to regret mentioning I wanted to work with React.

PR #937: Finish Search Feature
This PR was possible because I learned a few things from reviewing a PR by @Silvyre. Seriously, if you want to be a better developer, look into doing code reviews, you'll widen your perspective.

PR #989: Teach tools/autodeployment/server.js about a release to master
Our staging box auto-deploys itself whenever there is a change in the master branch. We wanted to change this for production so it only auto-deploys when there is a tagged release. One of the things I neglected during my years at Seneca is scripting, or just using Linux commands. I've started on the path of slowly redeeming myself with this PR.

PR #993: Remove "feed added successfully" after some period
Front-end change: this uses the SnackBar component implemented in #931 to display a toast informing the user that their feed has been added.

Wrapping Up
I started this journey in OSD600, where our first assignment was to create a notepad app (I did the bare minimum) and try to enhance or debug other students' implementations of it on GitHub. I even remember having issues writing the notepad app. Back then, I could not imagine contributing to building a project from the ground up (Telescope)... I could barely fix an issue a week to keep up with Hacktoberfest during that time.

Now that OSD700 is about to wrap-up, I've noticed how much my attitude and skill changed:

We have to implement a new tool for Telescope? No problem, time for some experimenting.
There's a new component that needs to be built? Front-end huh... but I'm game.
New issues dealing with a tool that has been implemented but I haven't used yet? Pick me!
Nginx issues? Oh god. where's @manekenpix to hold my hand through this?

Overall this course and project were extremely fun. We worked like a team you would find in a workplace: we had dev-ops, back-end devs, and front-end devs. We got to explore and experiment with different tools we normally wouldn't get a chance to work with all at once: ElasticSearch, Redis, Kubernetes (lol), Gatsby, GraphQL, Nginx, Docker, Jest, SSO. It did not matter that we weren't able to use all of them in the project; just the opportunity to experiment with them improved my knowledge, which I am very grateful for.

I think the reason these open source classes were so fun is that everything we did was open ended. These were real world issues; there was no answer guide for us to look at when we were stuck. Sometimes even our prof was reading API or tool docs to understand and help us out when we were stuck on our PRs.

Lastly, I do not think this would have been possible without our prof and lead @humphd; he's a monster. Deployment, back-end, front-end, authentication, system + architecture design, he is knowledgeable in it all. He can and will request changes on the PR you spent hours working on. As of writing, we have ~468 closed PRs, and he has probably reviewed close to that same number. Thank you for guiding us.

In no particular order it was a blast working with you guys. Thank you @manekenpix, @cindyledev, @Silvyre, @agarcia-caicedo, @grommers00, @miggs125, @lozinska, @yatsenko-julia

Other students at Seneca, if you have a chance to take OSD600/OSD700. Please do. This is probably the highlight of your program.

Monday, April 6, 2020

OSD700 Release 0.9

This post serves two purposes. I was told to blog and this is to test PR #931 for Telescope.

I'd like to give a big shout out to frequent collaborator and contributor @manekenpix for always helping me out. Be it reviews, collaborating on issues or help testing. He has made contributing to Telescope a lot easier than it should've been.

Here's the PR list I worked on for 0.9 and will continue to work on for 1.0

PR #937: Finish Search Feature
This is a PR in progress

The search feature we have right now works, but it isn't polished. We currently only support one search filter: authors. Searching takes a while, but there's no indication of whether the results are loading or not, so I added a spinner to fix this. I also have to add an endpoint to search through Post content and return post results based on the data we get back.

PR #931: Add a Way to Receive Updates When New Posts are Available
This is a PR in progress

This is pretty close to finishing. I added a timer that will fire off a fetch request to get the number of posts in Telescope. If there is a change, a non-intrusive alert should pop up for 6 seconds letting the user know there are new posts available.
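The polling described above might look something like this. This is a sketch with hypothetical names and endpoint, not the actual Telescope code:

```javascript
// Decide whether the "new posts" alert should fire.
function shouldNotify(previousCount, currentCount) {
  return currentCount > previousCount;
}

// Wiring it up in the component might look like (assumed endpoint):
// let lastCount = 0;
// setInterval(async () => {
//   const res = await fetch('/posts/count');
//   const { count } = await res.json();
//   if (shouldNotify(lastCount, count)) showAlert('New posts available');
//   lastCount = count;
// }, 60000);
```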

This PR also led @manekenpix and I on a 3 hour wild goose chase to track down why we weren't able to get one of the custom headers we had for Telescope. This resulted in the following PR #934, that's right, if you check the PR it took us 3 hours to add/change 4 lines. It was fun, but so so frustrating.

PR #919: Feed Type Should Support Delete, Modify (Delete + Create)
When I get frustrated with the two PRs above, I like to work on this one because the backend doesn't lie. I don't have to deal with stuff rendering or not rendering on my screen, or go read up on new hooks. I pray that if I ever have to do front-end development in the workplace, I'll only have to use the useEffect and useState hooks.

Anyway, this PR provides the functionality of removing a Feed and having it also remove associated posts in Redis + ElasticSearch. This PR was enjoyable because it taught me the advantages of writing lots of tests. The function works; I just need to finish writing a test for it and it should be ready to be reviewed by our gatekeeper @humphd.

Other PRs I worked on included refactoring the layout.js component, which previously used class components, to use functional components. Refactoring it didn't take long, and by the end I realized I had gotten pretty good at using React's useEffect and useState hooks.

As we're nearing 1.0 for Telescope, with so many things left to finish prior to shipping, I present the following, as nobody in the Telescope channel has started panicking yet.



Thursday, April 2, 2020

Developing with a Test First Approach

I spent this week kind of sluggishly finishing up PRs that were close to completion, such as the version on the banner. We can now see the latest commit Telescope is running by clicking on the version.

Another one I finished up since last release was configuring Nginx to use recommended settings by Mozilla.

Yes, I am still neglecting Kubernetes...

However, today I'm here to write about an issue I took on and started working on at 2AM this morning, Issue-908, and I think I am close to finished. This work involved Redis, which was nice, as some of my earliest contributions to Telescope were Redis-related issues.

For this PR I took a different approach and put a heavy emphasis on thinking about and writing tests: I quickly wrote new features that should have mostly worked, and used the tests I wrote to help make sure they were mostly correct. Up to this point, I hadn't really written many tests; at most I had modified existing ones. So this time I made sure I wrote a ton of tests to cover different situations, and from there worked on the new features until what I was expecting and what I was actually receiving matched.

The approach was refreshing: instead of having to console.log() things, the tests told me exactly what they were doing and what value was being returned. For example, as part of this PR I had to create a function that removes a Feed and all its associated Posts. Some people might write a test that adds two posts belonging to a feed, deletes the feed, and checks whether the posts still exist in the database.

Here's what I did:
  1. Create the feed for the test; make sure the created feed has the same values I used to create it.
  2. Create two posts; make sure the created posts' data has the same values I used to create them.
  3. Remove the feed; make sure the removed feed doesn't return anything, and check that both posts are gone too.
Is this test perfect? Probably not; there may be edge cases I haven't thought of yet. Is this overkill? I have no idea, but more is probably better in this case. Will I write more tests in the future, as long as they're not in the front-end? You bet. With all these tests, if any of them start failing, I can probably pinpoint where the code went wrong instead of playing a guessing game.
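The shape of those three steps can be sketched as a runnable test against a hypothetical in-memory store (standing in for Redis + ElasticSearch; none of these names are from the real Telescope code):

```javascript
// Hypothetical in-memory stand-in for the storage layer, just to show
// the shape of the test described above.
const db = { feeds: new Map(), posts: new Map() };

function addFeed(id, data) {
  db.feeds.set(id, data);
  return db.feeds.get(id);
}

function addPost(id, feedId, data) {
  db.posts.set(id, { feedId, ...data });
  return db.posts.get(id);
}

function removeFeed(id) {
  // Removing a feed also removes all posts that belong to it.
  db.feeds.delete(id);
  for (const [postId, post] of db.posts) {
    if (post.feedId === id) db.posts.delete(postId);
  }
}

function getFeed(id) {
  return db.feeds.get(id);
}

function getPost(id) {
  return db.posts.get(id);
}

// 1. Create the feed and verify it round-trips.
// 2. Create two posts and verify they round-trip.
// 3. Remove the feed and verify the feed and both posts are gone.
```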

Be responsible, write tests!

Monday, March 30, 2020

The Importance of Taking Time Off

Ever since I've enrolled in the Open Source Development classes at Seneca, I've had a blast. I learned about using all sorts of new technologies, got to collaborate with people who are much more skilled at programming than I am, and I've had the chance to contribute to projects that seemed interesting. I could've graduated last semester, however the project and the idea of being able to learn and ship a product under the guidance of a very experienced professor convinced me to stay for another semester just for this course.

This isn't a post to say I regret my decision, far from it. During the week prior to writing this post, I'd been waking up at 8 or 9 in the morning and working on issues until, usually, late in the morning of the next day (it seems this isn't unique in our class). But I started to feel kind of burnt out; the things I enjoyed doing just a week ago, I started to procrastinate on or not look forward to. I decided on a simple solution: take the weekend off and enjoy things not related to Telescope. Do some exercise, go for a walk (I have no idea how advisable this is currently), spend time with the family, or watch a movie.

I think it helped. As I'm writing this blog, I am content with my routine of checking Slack, opening up my laptop that hasn't been opened for a few days, browsing through outstanding issues on GitHub, typing the commands docker-compose up elasticsearch redis then npm start, and fixing currently stale PRs.

I think this is applicable to probably anything and not just my situation, if you're starting to not enjoy something, take a bit of time off, enjoy other things and then re-evaluate.

Sunday, March 22, 2020

OSD700 Release 0.8

This release was almost like 0.7: three weeks (kind of) to work on it. I worked on tackling most of the existing issues assigned to me, as I had ~15 outstanding issues that didn't have a PR yet.

All links are to their pull requests.

Issue 538: Expose Search Endpoint Via Web API
A throwback to working with Express.js. A previous PR went in to include ElasticSearch; we could already use its client on port 9200, but we hadn't really integrated it with Telescope, and this PR was to fix that. I created a new endpoint, now at https://dev.telescope.cdot.systems/query, and we also now have a query parameter called search that accepts search strings of fewer than 256 characters. Thank you to @raygervais, @manekenpix, @cindyledev for the reviews.

https://dev.telescope.cdot.systems/query?search="search string here" should return blogs containing whatever is entered after the "=" sign. URLs usually encode spaces with %20 and I thought I would have to decode this, but surprisingly this was not the case; Express.js handles the encoding and decoding (I assume). I also added an error message informing the user if they omit, or provide an empty string for, the search query parameter.
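The validation described above can be sketched as a small helper (hypothetical names, not the actual Telescope route code):

```javascript
// Reject a missing or empty search string, and enforce the
// under-256-characters limit mentioned above.
function validateSearch(search) {
  if (!search || search.trim().length === 0) {
    return { ok: false, error: 'search query cannot be empty' };
  }
  if (search.length >= 256) {
    return { ok: false, error: 'search query must be under 256 characters' };
  }
  return { ok: true };
}

// In an Express route this might be used roughly as:
// router.get('/query', (req, res) => {
//   const { ok, error } = validateSearch(req.query.search);
//   if (!ok) return res.status(400).json({ error });
//   // ...run the ElasticSearch query...
// });
```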

Issue 634: Nginx Configuration for Staging and Production
This one was awesome. I still have no idea what I'm doing with Nginx, but it was extremely fun to collaborate with someone while trying to get this to work. Building on top of what @manekenpix had done previously to cache static files within Telescope, we're now also caching all endpoints for Telescope. I forget the exact settings we used, but hitting an endpoint on Telescope will now cause Nginx to cache the response for a while, and instead of having to go to Telescope to get the requested page, Nginx will serve the cached endpoint until it is considered stale.

Assuming there are no cached endpoints yet, we can test it with curl -I https://dev.telescope.cdot.systems/posts; the response should include the header 'X-Proxy-Cache: MISS'. Visit the address in the browser, then use the same curl command again, and this time you should receive 'X-Proxy-Cache: HIT'.

Issue 648: Switch from in-memory to Redis-backed Session Management
This was a simple PR: switching from the package we were using to a production-ready one. It was simple until I realized my PR was breaking a lot of our current tests. A suggestion from @humphd to use our existing ioredis library fixed all these issues.

Issue 668 Compare Nginx Config with Mozilla Recommendations
Another Nginx-related PR where I have no idea what I'm doing, except applying Mozilla's recommendations in our Nginx configuration file.

Issue 724 Add Site Property to Feeds and Redis
I think this PR is close; I just need confirmation that what I'm doing is a correct approach. The feedparser-promised package parses all posts for the processed feed and returns a link in its metadata which is supposed to contain the URL without the tags; for example, https://c3ho.blogspot.com/feeds/posts/default/-/open-source should become https://c3ho.blogspot.com. But upon further testing, this is not the case, and only feeds from WordPress work as intended. Instead, I wrote a simple function to take the feed URL provided by the user and use some regex to obtain the "link", forgoing the metadata link route as it is not consistent.
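As an alternative to the regex, Node's built-in URL class can derive the site link from a feed URL. A sketch (not the actual Telescope implementation, and it only covers the simple case where the site is the feed URL's origin):

```javascript
// Derive the site link from a feed URL using the WHATWG URL API,
// which exposes the scheme + host as `origin`.
function siteFromFeedUrl(feedUrl) {
  return new URL(feedUrl).origin;
}
```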

Issue 750 Make Search Bar Return Results
Building upon my previous work creating a component for Author results, I now had to combine the GraphQL queries I worked on with the Author component so results are displayed when a user types a string into the search bar. This was fun and stressful: it taught me more about React hooks, but brought tons of frustration trying to get GraphQL queries and Apollo Client to work on the front end. In the end I wasn't able to get the search bar to return results when the button is clicked, so I opted to dynamically return results as the user types. Thank you @cindyledev for religiously reviewing all the later commits for this PR.

Issue 803 Include Version Info on Header Banner
This PR works; we've tested it locally with the commands npm run build and npm run develop, and it has worked on several machines. I just don't know why ZEIT doesn't like it. We turned the version info on the banner into a link: hovering over it shows the SHA of the commit it is on, and clicking it brings the user to the commit on GitHub.

I keep saying this, but for this upcoming release I'll be finishing up unfinished PRs and getting Kubernetes working so I can tackle replacing our REST APIs with serverless functions.

Friday, March 13, 2020

Serverless Functions(Node)

Serverless functions are pretty cool; they take care of another worry developers might have: what if I get so many requests that it overloads my server? Simple, you don't have one. You let an almost trillion dollar company (Amazon) handle it. These functions scale up and down depending on the number of requests, all handled by AWS Lambda.

We'll be using the serverless package, which makes setting up AWS Lambda pretty simple.

Before we begin, make sure you have an AWS account and a user created, and give the user programmatic access:
  1. Use the command: serverless config credentials --provider aws --key userKey --secret userSecret. Replace userKey and userSecret with the appropriate information for the user you created in your AWS account.
  2. Use the command serverless create --template aws-nodejs --path folderName. Replace folderName with a name of your choice; this will create a folder containing serverless.yml and a file called handler.js.
There are really two parts that make up serverless functions: the serverless.yml and the corresponding .js file containing the functions. For this example I'll have a file called handler.js containing all the functions I wish to make serverless.

When hooking up the functions, we must do a few things:
  1. Define the functions in the file (handler.js) and export them
  2. Make sure a handler entry exists for each function in serverless.yml
In my handler.js file I'll have the two following functions

hello(), which returns the message 'Hi', and bye(), which returns the message 'Bye'.
Here's what bye looks like:

module.exports.bye = async (event, context, callback) => {
  // For an HTTP event, Lambda expects an object with a statusCode and
  // body; an async handler can simply return it instead of calling back.
  return {
    statusCode: 200,
    body: 'Bye',
  };
};

The event argument contains information about other AWS services the function has gone through, if it went through a load balancer it will contain information about the load balancer. For more information about it, find it here.

The context argument contains information about the invocation, function and environment. For more information about it, find it here.

The callback argument is used to send information back in case of success or error; it accepts two arguments, callback(error, result). Amazon provides documentation on how to handle async vs sync callbacks here.

Make sure both functions are exported. In the .yml file, under the functions: section, you want to create an entry for each:
functions:
  hello:
    handler: handler.hello
    events:
        - http:
             path: users/hello
             method: get

  bye:
    handler: handler.bye
    events:
        - http:
             path: users/bye
             method: get
You'll notice we have the function name, followed by handler: fileName.functionName.

To push the code to AWS, use the command serverless deploy -v. You'll have to redeploy any time you want your changes reflected on AWS.

To call any of the functions from the command line, use serverless invoke -f functionName.

To test your app locally, we'll use the serverless-offline package. Once the package is installed, add the following at the end of serverless.yml:
plugins:
  - serverless-offline

Use the command serverless offline start to start it locally. By default this uses port 3000, and you should now be able to get 'Bye' in the terminal or console when you hit the route localhost:3000/users/bye.

This wasn't too bad. Now you can say you have knowledge of cloud based programming!

Source: https://hackernoon.com/a-crash-course-on-serverless-with-node-js-632b37d58b44

Planning for March Break

Most public institutions have closed or are preparing to close for the next week, and now we're getting an unexpected March Break, so here's some planning for what to do over the next few weeks while this happens:

Running:
Last October I ran the Scotiabank half marathon (21 km), a goal of mine since I was young. I've been thinking that if this year's half marathon goes well, I'll try for a full marathon (42 km) and make qualifying for the Boston Marathon an eventual goal. I thought I did pretty well finishing with a time of 1:45 for a half marathon, until I realized that to qualify for the Boston event you need to finish a full marathon in ~3:03. That means I have to shave 15 minutes off my half marathon time WHILE running twice as long. I've got a long way to go. The improving weather and time off should let me start running earlier.

Boxing:
I've been boxing for ~6 years now and have been an instructor for 2 years. This week or more off should allow me to take some of the classes instead of only teaching.

Teaching:
A friend of mine reached out to me around December to see if I would be interested in teaching programming to children (Scratch) and pre-teens (basic JavaScript) once a week for 8 weeks at a community center starting in February. I'm assuming it went well, as he has asked me to teach HTML/CSS basics starting in April. Time to plan how and what to teach kids about HTML.

On a side note, for anyone who wants to quickly test things in JavaScript, like a quickly written function or some packages, without installing Express and all that stuff in Visual Studio Code, I highly suggest Glitch. They have options for launching a basic web page or a node-express app (which lets you install packages), plus a terminal to interact with your app and the option to import/export your code from/to GitHub.

Dogs:
I have two dogs, got to take them out for more walks or runs during the time off!

Programming:
During the first class of OSD700, our professor mentioned something pretty interesting we could look into for Telescope: serverless functions. I'm not entirely sure we'll have the opportunity to use them in Telescope, but I figured I'd look into it and, if the opportunity arises, collaborate with another student who has expressed interest in it and implement it.

Another goal is to learn a bit more about deployment: Docker and Nginx. I just kind of use Docker to get ElasticSearch running, but have no idea how to create a Dockerfile from scratch myself, as all the hard work was done by @manekenpix and @raygervais.

Last goal I have is to also trim down the amount of issues I have assigned to myself. I think I had somewhere around 15 issues...

Wednesday, February 26, 2020

OSD700 Release 0.7

I tried to become a fullstack developer for this release. It... went better than I thought somehow? For this release I finally got to work on some front end. I remembered again why I like the back end so much more.

Thankfully, this release spanned 3 weeks instead of the 2 we're normally allotted. I spent the first week relearning how to use React and also learning how to use Material-UI. We're using Material instead of Bootstrap because some students in the class have developed a severe allergic reaction to Bootstrap.

Author Result Component
Issue can be found here

I'm ashamed to admit it took me at least 3+ hours just to make the search result component, which can be found here (that time doesn't include the design by @agarcia-caicedo). I'm very grateful I didn't have to design the component as well, or we might not have a component at all. Also a big thank you to @cindyledev for her reviews and suggestions, as they made the process a lot quicker.

Attaching the MyFeeds Component to the Backend
Issue can be found here

Wow, this took a whole day. Prior to this release, I saw our professor @humphd submit a PR to refactor some previously merged React code from class components to functional components and didn't understand what was going on... since it was front-end stuff. 

Well... a day was spent learning what functional components are (I learned React the class component way) and all the changes the React community has implemented. I read about how they are trying to steer people toward using React hooks instead of the previously used class components (although those are still supported). It took a while to understand (their docs are extremely helpful); I'm still a newbie, but I understand it a lot more and have to admit it is much better than the class component approach. Thank you to @Grommers00 for his work on the backend code changes and @Silvyre for creating the component + quick review.

Automatic Deployment to Staging on Master
Issue can be found here

I spent an afternoon with a frequent collaborator @manekenpix to figure out how this all works. It was pretty interesting as we thought it would take a very long time to get this working, but after 2 hours we had a simple server that was listening to issues filed in a private Github repository and would output a message. The next step would be to automate all the shell commands that are currently being manually executed by him whenever we want Staging to be updated. I am just hoping he gets to work on more issues aside from deployment because he has expressed how much he enjoyed coding.

Now, automatic deployment can range from really simple to extremely complex: from an app that experiences some downtime while it updates, to something like the Green/Blue model that minimizes any downtime.

Green/Blue - Two identical versions (Green and Blue) of the app are available; one sits idle while the other runs. When we bring down the running one (Green), we direct traffic to the other (Blue). This minimizes downtime while Green's files are deleted, updated to the latest version of the app from the GitHub repo, and Green's containers are spun up. Once Green is up and running again, we direct traffic back to the newly updated Green and idle Blue again. Blue can also act as a backup which we can revert to if breaking changes land or if Green somehow experiences issues.

Switch our app from REST to GraphQL
Issue can be found here

We implemented GraphQL and it works, but only if you go to http://localhost:3000/playground or https://dev.telescope.cdot.systems/playground. Out of all the issues, this one was the most frustrating. I thought we were using Gatsby because it works well with GraphQL; I didn't know how it was going to work well, I just assumed it would kind of work. That wasn't the case: we had to install an extra plugin called gatsby-source-plugin. It took a whole day for me to understand the changes I was making in gatsby-config.js 😂.

Anyways, this issue was about making all the queries we built for GraphQL usable in the front end code too. A PR is ready to go which specifically addresses this issue... however, moving forward, some experimenting will be needed to understand how to use the data properly, as well as what Static Query / Page Query / Static Query Hook are, in order to use Gatsby properly with GraphQL.

Overall, this release was a fun way to move away from just working on the back end, refreshing/updating my knowledge of front end stuff and working on issues that sat between the two. I can't design things to save my life, but I can probably implement the design in React! Since I can do back end and front end things, I'm a full stack developer now, right? Probably not.

For 0.8 alongside triaging issues, my goal is to try to get Kubernetes/minikube up for Production.

Sunday, February 9, 2020

Fullstack Developer Wanted??

I don't get these job postings.

There was apparently an infamous post written on Medium by a big-name guy declaring that Fullstack is dead.

After working on Telescope, I think so too. The field is so broad now, with an ever increasing amount of technology a developer is supposed to know. A typical fullstack developer posting goes as follows:

We're looking for a fullstack developer with the following experience:
  • Node.js + Express/Java/Golang/Python/.Net
  • Javascript, CSS, HTML5
  • Angular/React + Redux/Vue
  • Docker
  • No SQL: MongoDB/ Cassandra/ DynamoDB
  • SQL: Oracle/ MySql
  • AWS (may or may not include serverless functions)
  • GIT
Bonus if you have experience with the following:
  • Redis/Memcached
  • User Experience design
  • GraphQL
  • CI/CD: Travis, Circle
  • Kubernetes
  • JWT/Auth0
WTF? One of the students in the class has spent pretty much a semester, if not two, just learning and trying to implement SSO for Telescope. I mean, I understand this is a wishlist, but this is insane. You might as well hire my Open Source prof, @humphd, at this rate. Thanks.

OSD700 Release 0.6

Worked on a few issues for Telescope for this release:

GraphQL documentation for Telescope
Issue can be found here

This was fun; I knew nothing about GraphQL going into this. By the end of it I was even hacking away at nested queries on my own branch, which we haven't even implemented in Telescope yet. I always shied away from documentation because I'd rather be coding. I guess it is true: to see if you've really learned something, you should be able to explain or teach it to someone else.

GraphQL filters for Telescope
Issue can be found here

Aside from documenting how to use GraphQL, I also took on an issue which required me to rewrite some queries to allow filtering and support future search functionality for the front end. This taught me some pain points of GraphQL, as I always assumed it could do the stuff a traditional database can, for example: select * from posts where posts > provided date, or something along those lines. GraphQL can't support this without installing another library, so I ended up writing my own logic to do filtering and pagination.
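To give a flavour of what that hand-rolled logic looks like (a sketch, not Telescope's actual code; the post shape and function names here are made up):

```javascript
// Sketch: the filtering GraphQL can't do for us, written in plain JavaScript.
// A resolver would call these after fetching the posts from the datastore.

// Roughly the "select * from posts where published > date" we wished for
function filterPostsAfter(posts, date) {
  return posts.filter(post => new Date(post.published) > date);
}

// Simple offset pagination over the filtered results
function paginate(items, page, perPage) {
  const start = page * perPage;
  return items.slice(start, start + perPage);
}
```

A hypothetical resolver would then wire these together, e.g. `(parent, { date, page, perPage }) => paginate(filterPostsAfter(allPosts, new Date(date)), page, perPage)`.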

On a side note, I also learned people can publish scalars (GraphQL types) in packages for other people to download and use.

Include logic to filter inactive feeds and invalidate inactive feeds for Telescope
Issue can be found here

Another issue I started over the Christmas weekend and finally finished. This went through a few iterations, and in the end it was suggested to scrap the current code in favor of a more Redis-oriented solution.

Refactor promises for plumadriver
Issue can be found here

Our prof suggested this issue to me, since I did quite a bit of work on refactoring promises for Telescope. It was an interesting experience reading TypeScript code and contributing to another repository after a few months of only working on Telescope.

Wednesday, January 29, 2020

GraphQL Nested Queries

The whole point of GraphQL is its flexibility: I can view all the authors in the database, then add an additional query that displays all the books by one author. We call these nested queries. I recently spent an afternoon + evening with @manekenpix taking a look at nested queries in GraphQL for the Telescope project.

We currently have a schema like below
  # 'Feed' matches our Feed type used with redis
  type Feed {
    id: String
    author: String
    url: String
    posts: [Post]
  }

  # 'Post' matches our Post type used with redis
  type Post {
    id: String
    author: String
    title: String
    html: String
    text: String
    published: String
    updated: String
    url: String
    site: String
    guid: String
  }

Notice Feed can also return an array of Post. To allow nested queries, we have to define them in resolvers after the Query:

module.exports.resolvers = {
  Query: {
    //Queries are here
  },
  Feed: {
    posts: async parent => {
      const maxPosts = await getPostsCount();
      const ids = await getPosts(0, maxPosts);
      const posts = await Promise.all(ids.map(postId => getPost(postId)));
      const filteredPosts = posts.filter(post => post.author === parent.author);
      return filteredPosts;
    },
  },
};

The above code gets all Posts in the database, then filters them, returning only the Posts whose author matches the feed's author. For example, if I run the following query in GraphQL

{
  getFeedById(id: "123") {
    author
    id
    posts {
      title
    }
  }
}

and the author's name is Marie, the parent parameter provided to the nested query (posts) will be the result of getFeedById, in which the author's name is Marie.
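Stripped of the Redis calls, the nested resolver is really just an array filter over the parent's author. A minimal stand-alone version (with hypothetical in-memory data instead of our actual database) of what happens:

```javascript
// Stripped-down version of the Feed.posts resolver idea, with the
// Redis lookups replaced by an in-memory array of hypothetical posts.
const allPosts = [
  { id: "1", author: "Marie", title: "First post" },
  { id: "2", author: "Josue", title: "Other post" },
  { id: "3", author: "Marie", title: "Second post" },
];

// `parent` is the Feed returned by the outer query, e.g. getFeedById
function postsForFeed(parent) {
  return allPosts.filter(post => post.author === parent.author);
}
```

So `postsForFeed({ author: "Marie" })` returns only Marie's two posts, which is exactly what the nested `posts` field resolves to.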

Real life data using a classmate of mine:



Friday, January 24, 2020

OSD700 Release 0.5

As part of 0.5 I worked mainly on two issues and got a chance to help someone start contributing to Telescope.

Async/Await
I've blogged a bit about using async/await to replace our Promise code in Telescope. I started the work during the winter break and was finally able to get it merged this week. The issue actually took a while, as it spanned ~15 files in Telescope and had me refactoring functions and tests at the same time, which admittedly was pretty scary. I can say I know how to use async/await a bit better now, but there's still a long road ahead!

Kubernetes(minikube)
My other issue has been a collaboration with another classmate, @manekenpix, to deploy Kubernetes (minikube) for Telescope at http://dev.telescope.cdot.systems/planet. We've had success deploying services and even got the ingress to work locally on our own machines. However, after 5 hours of sitting down and lots of expletives yelled at the computer, we hit an issue when trying to deploy it on the machine CDOT has prepared to host Telescope. We forgot minikube runs inside a VM on the computer, so exposing the service and deployment only really exposes it to the computer the VM is running on. After a bit of research and asking around on the Slack channel, we have decided to try a bridged connection to expose the VM to outside traffic. We're crossing our fingers to have this for 0.6 (hopefully).

Helping a new contributor
Lastly, our professor Dave Humphrey has been actively recruiting students from his other classes to participate in Telescope (where was this teacher when I started learning web development?), which I think is an amazing idea, as they gain experience filing/fixing issues, receiving feedback and just collaborating with other programmers on an open source project. One student took on a great starter issue to standardize the error codes in the project. I acted as a kind of mentor helping the contributor get their code merged. This gave me flashbacks to OSD600, where our professor pretty much spent the whole semester teaching git and helping students with their git problems. Long story short, the student was able to get their PR merged and is happily taking on another issue. Git is hard, and it is even more so when things land daily if not every few hours; the student admitted he'd used git before, but wasn't used to the pace at which Telescope moved.

The mentoring also taught me something: our professor has started to emphasize the importance of submitting a PR with some work completed instead of a full-fledged PR. This way, if their current work is starting to go sideways, the community can direct the contributor to the correct path, preventing them from going further down the wrong one. For example, the contributor I was helping kept trying to rebase, apply their changes and commit to their PR all in one go, and this kept failing. Instead, I asked the contributor to:
  1. rebase their PR, drop any unrelated commits, and push the code to their PR. At this point we'd review and see what other changes we needed to make, such as whether we had to bring any files over from master because a file on the working branch was too far gone.
  2. if the current state of the PR looked good, apply their changes to fix the issue, and review to see what other changes we needed to make.
This approach worked a lot better and the contributor got their PR merged today!

In hindsight, I think I've become a better programmer. 4-5 months ago I was attempting to enhance another person's simple note-taking app on GitHub.

Sunday, January 19, 2020

Async Await and Promises

As a continuation of my PR for Telescope, I thought I should talk a bit about async/await and the old way of using return new Promise(). Here are a few examples of do's and don'ts:

// Async functions return promises, no need to add await 
// DON"T DO
async function returnsPromise(){
  return await promiseFunction();
}

// DO
async function returnsPromiseFixed(){
  return promiseFunction();
}

//---------------------------------------------------------------------------

// Don't use await when function is not async 
// DON"T DO
function noAsync(){
  let promise = await promiseFunction();
}

// DO
async function noAsyncFixed(){
  let promise = await promiseFunction();
}
//---------------------------------------------------------------------------

// Writing errors
async function f() {
  await Promise.reject(new Error("Error"));
}

// SAME AS
async function f() {
  throw new Error("Error");
}
//---------------------------------------------------------------------------
// Use try/catch to wrap only code that can throw
// DON'T DO
async function tryCatch() {
  try {
    const fetchResult = await fetch();
    const data = await fetchResult.json();
    const t = blah();
  } catch (error) {
    logger.log(error);
    throw new Error(error);
  }
}

// DO
async function tryCatchFixed() {
  try {
    const fetchResult = await fetch();
    const data = await fetchResult.json();
  } catch (error) {
    logger.log(error);
    throw new Error(error);
  }
  const t = blah();
}
//---------------------------------------------------------------------------

// Use async/await. Don't use Promises
// DON'T DO
async function usePromise() {
  new Promise(function(res, rej) {
    if (isValidString) {
      res(analysis);
    } else {
      res(textInfo);
    }
    if (isValidString === undefined) {
      rej(textInfo);
    }
  });
}

// DO
async function usePromiseFixed() {
  const asyResult = await asyFunc();
}
//---------------------------------------------------------------------------

// Don't use async when it is not needed... Don't be overzealous with async/await
// For example, the sentiment module we're using is not an async function
// DON'T DO
module.exports.run = async function(text) {
  const sentiment = new Sentiment();
  return Promise.resolve(sentiment.analyze(text));
};

// DO
module.exports.run = function(text) {
  const sentiment = new Sentiment();
  return sentiment.analyze(text);
};
//---------------------------------------------------------------------------

// Avoid making things too sequential
// DON'T DO
async function logInOrder(urls) {
  for (const url of urls) {
    const response = await fetch(url);
    console.log(await response.text());
  }
}

// DO
async function logInOrder(urls) {
  // fetch all the URLs in parallel
  const textPromises = urls.map(async url => {
    const response = await fetch(url);
    return response.text();
  });
  // log them in sequence
  for (const textPromise of textPromises) {
    console.log(await textPromise);
  }
}
//---------------------------------------------------------------------------
// Examples
// refactor following function:

function loadJson(url) {
  return fetch(url)
    .then(response => {
      if (response.status == 200) {
        return response.json();
      } else {
        throw new Error(response.status);
      }
    })
}

// Solution (the function must be async to use await):
async function loadJson(url) {
  let fetchResult = await fetch(url);
  if (fetchResult.status == 200) {
    let json = await fetchResult.json();
    return json;
  }

  throw new Error(fetchResult.status);
}

// refactor to use try/catch
function demoGithubUser() {
  let name = prompt("Enter a name?", "iliakan");

  return loadJson(`https://api.github.com/users/${name}`)
    .then(user => {
      alert(`Full name: ${user.name}.`);
      return user;
    })
    .catch(err => {
      if (err instanceof HttpError && err.response.status == 404) {
        alert("No such user, please reenter.");
        return demoGithubUser();
      } else {
        throw err;
      }
    });
}

demoGithubUser();

// Solution:
async function demoGithubUser() {
  let user;
  while (true) {
    let name = prompt("Enter a name?", "iliakan");
    try {
      user = await loadJson(`https://api.github.com/users/${name}`);
      break; // no error, exit the loop
    } catch (err) {
      if (err instanceof HttpError && err.response.status == 404) {
        alert("No such user, please reenter.");
        // loop again and re-prompt
      } else {
        throw err;
      }
    }
  }
  return user;
}

// Call async from non-async
async function wait() {
  await new Promise(resolve => setTimeout(resolve, 1000));

  return 10;
}

function f() {
  // ...what to write here?
  // we need to call async wait() and wait to get 10
  // remember, we can't use "await"
}

// Solution:
function f() {
  wait().then(result => alert(result));
}
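Another common pattern (not from the exercise itself, just an alternative worth knowing) is wrapping the call in an async IIFE, which gives you a scope where await is legal even though the surrounding script is not async:

```javascript
// A variant of wait() from above, taking the delay as a parameter
// so the example runs quickly.
async function wait(ms) {
  await new Promise(resolve => setTimeout(resolve, ms));
  return 10;
}

// An async IIFE bridges the non-async outer scope and the async call
(async () => {
  const result = await wait(5);
  console.log(result); // logs 10 after ~5 ms
})();
```

Both approaches do the same thing; .then() is just the IIFE spelled out with the Promise API.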

Saturday, January 11, 2020

Wednesday, January 8, 2020

Kubernetes Pt3


*Blessed*


Thank you @manekenpix, I still have no idea how to fix all the problems we came across, but let us just enjoy this for now.

Monday, January 6, 2020

Kubernetes Pt2

In the previous post we used kubectl commands to deploy. However, we can also create .yaml configuration files and have kubectl create deployments from them.

The .yaml file will have the following structure:

apiVersion: (name)
kind: Deployment
metadata:
  name: (appName)
  labels:
    app: (imageTag)
spec:
  replicas: (replicaNumber)
  selector:
    matchLabels:
      app: (imageTag)
  template:
    metadata:
      labels:
        app: (imageTag)
    spec:
      containers:
      - name: (imageTag)
        image: (dockerImage)
        ports:
        - containerPort: (portNumber)

Then enter the following command:
kubectl create -f (.yaml file)

I pulled an example from the edx Kubernetes course using the nginx image to deploy a webserver

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80

This will deploy an app named webserver replicated across three pods.

We can also define an (appName)-svc.yaml file to expose our service with the following content:

apiVersion: (get this value from running kubectl api-versions)
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: (serviceType)
  externalName: (externalLink) *Use this field if serviceType is set to ExternalName
  ports:
  -  port: (portNumber)
     protocol: TCP
  selector:
    app: (imageTag)

Then enter the following command:
kubectl create -f (appName)-svc.yaml

serviceType can be any of the below:
  1. LoadBalancer - if the cloud provider Kubernetes is running on provides load balancing.
  2. ClusterIP - the service can only be reached from within the cluster
  3. NodePort - creates a ClusterIP service and a NodePort that routes to it. Allows access from outside the cluster by using NodeIP:NodePort
  4. ExternalName - maps the service to the contents of the externalName field
Also pulled from the edx Kubernetes course:

apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: NodePort
  ports:
  -  port: 80
     protocol: TCP
  selector:
    app: nginx

Kubernetes

Containers have become all the rage nowadays, and I have zero experience with either Docker or Kubernetes. This post serves to explain some Kubernetes concepts to myself.

Pods - can be made up of one or more containers. Pods can also be replicated horizontally to allow scaling of an app.

Deployments are used to manage pods, to deploy a pod we use the following line
kubectl create deployment (appName) --image=(imageName)

kubectl get deployments - will display all current deployments

kubectl get pods - will display all pods 

kubectl get events - will display all the things that have happened, such as new pods

Although we have created a deployment for our pod, it is only accessible within the Kubernetes cluster. A Service enables access to the deployed app; to create a Service we use the following command

kubectl expose deployment (appName) --name=(serviceName) --type=LoadBalancer --port=(portNumber)

*if the --name=(serviceName) flag is not provided, the service will default to the appName
*--type= can be any of the below:
LoadBalancer - if the cloud provider Kubernetes is running on provides load balancing.
ClusterIP - the service can only be reached from within the cluster
NodePort - creates a ClusterIP service and a NodePort that routes to it. Allows access from outside the cluster by using NodeIP:NodePort
ExternalName - maps the service to the contents of the externalName field

We can verify the Service has been created by using the following command:
kubectl get services - this will display all exposed Services

minikube service (serviceName) - will open the service in your browser.


Technically, the steps we need to follow to deploy an app on Kubernetes:
1. Create a deployment (kubectl create deployment (appName) --image=(imageName))
2. Expose the deployment (kubectl expose deployment (appName) --name=(serviceName) --type=LoadBalancer --port=(portNumber))
3. Run the service (minikube service (serviceName))

To replicate the pods we use the following command
kubectl scale deploy (appName) --replicas=(replicaNumber)

On a side note, this also lets us manage deployments on the fly. Say our current image version is not compatible with the other images; we can change the version by using the following command.
kubectl set image deployment (appName) (containerName)=(imageName)

Kubernetes tracks histories of all changes made to the deployment, such as when changing the image for a deployment. They can be viewed with the following command
kubectl rollout history deploy (appName)

When changes are made to the image, Kubernetes will automatically scale down replica sets of the deployment with the old image and automatically spin up the same number of replicas for deployment with the newer one. We can verify this by using
kubectl get rs -l app=(appName)

To rollback changes made to a deployment we use the following command. The revisionNumber can be any of the ones listed when running the command kubectl rollout history deploy (appName)
kubectl rollout undo deployment (appName) --to-revision=(revisionNumber)

When rolling back changes, a new revision will be created and the revision number of the one we rolled back to will be removed. For example, say I initially deployed with an image of version 1.15 and changed the image to version 1.16. There should be a total of 2 revisions:
  • 1 (my initial image of version 1.15)
  • 2 (my current image of version 1.16)
If I roll back to revision 1 with the above command, a new revision (3) will be added to the table and revision 1 will be removed. My history will now look like the following:
  • 2 (image of version 1.16)
  • 3 (image of version 1.15, I rolled back to)
Kubernetes tracks up to 10 revisions for your rollback pleasure.
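To make the renumbering concrete, here's a toy model of that bookkeeping in JavaScript (purely illustrative; kubectl obviously doesn't work this way internally):

```javascript
// Toy model of rollout history renumbering on `kubectl rollout undo`:
// the revision we roll back to disappears from the history and its
// image comes back as a brand-new, highest revision number.
function rollback(history, revisionNumber) {
  const target = history.find(rev => rev.revision === revisionNumber);
  if (!target) throw new Error(`revision ${revisionNumber} not found`);
  const next = Math.max(...history.map(rev => rev.revision)) + 1;
  return history
    .filter(rev => rev.revision !== revisionNumber)
    .concat({ revision: next, image: target.image });
}
```

Running it on the 1.15/1.16 example above reproduces the history shown: revision 1 vanishes and revision 3 appears with the 1.15 image.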

To delete the deployment use the following command
kubectl delete deployments (appName)
