Hüseyin EROĞLU

10/20/2020

About Me

Hi! My name is Huseyin EROGLU. I graduated from the Department of Political Science and Public Administration in 2016. After graduation, I started working at Vakifbank as an Internal Controller. In this role, my job is to check the bank's compliance with SPK and BDDK legislation and other regulations, supervise the workflow processes of the units, and give advice to the board.

Because of technological changes in the financial sector, such as cryptocurrencies and the use of robots in financial decision-making, I came to think that the banking sector might be at risk unless it transforms itself to use technology and big data. Banks hold huge amounts of customer data but mostly do not know how to use it efficiently. Therefore, I chose to do my master's in the field of Big Data Analysis, so that I can put Big Data to work in the banking sector. Check my LinkedIn profile: https://www.linkedin.com/in/h%C3%BCseyin-ero%C4%9Flu-3a8740130/

useR2020

Using R to Support COVID Response at the Health System

In this video, Corry Frich from UW Health talks about the organization's work during the COVID-19 pandemic, where they used R to support a rapid COVID response. The work covers seven hospitals in two regions, with access to electronic health records as well as data warehouses. Using these data, they build models that predict the months ahead. He says that RStudio lets them easily pull in publicly available information, create scripts that can be rerun daily, and analyze public data together with their local data. R also enables them to build an SEIR (susceptible, exposed, infectious, recovered) model, which they can update, review, and draw predictions from.
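To make the last point concrete, here is a minimal sketch of what a deterministic SEIR model can look like in R. It uses the deSolve package, and the population size and parameter values are made up for illustration; they are not taken from the talk.

```r
library(deSolve)

# SEIR dynamics: susceptible -> exposed -> infectious -> recovered
seir <- function(time, state, parms) {
  with(as.list(c(state, parms)), {
    N  <- S + E + I + R
    dS <- -beta * S * I / N             # new exposures
    dE <-  beta * S * I / N - sigma * E # exposed become infectious
    dI <-  sigma * E - gamma * I        # infectious recover
    dR <-  gamma * I
    list(c(dS, dE, dI, dR))
  })
}

state <- c(S = 9999, E = 0, I = 1, R = 0)              # illustrative population
parms <- c(beta = 0.5, sigma = 1 / 5, gamma = 1 / 10)  # illustrative rates
times <- seq(0, 180, by = 1)                           # days

out <- ode(y = state, times = times, func = seir, parms = parms)
plot(out)  # one panel per compartment
```

Once the model is wrapped in a function like this, updating a parameter and rerunning the prediction is a one-line change, which is the kind of workflow advantage the talk describes.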

R Posts Relevant to My Interests

Visualizing the Capital Asset Pricing Model

This post is about how to use R for the Capital Asset Pricing Model (CAPM). The model analyzes the linear relationship between our portfolio returns and the market returns, and the riskiness of the portfolio - how volatile it is relative to the market. It compares the portfolio's returns with other asset returns using the expected return and the standard deviation of possible returns. In the data used in this exercise, there appears to be a strong relationship between the portfolio and market returns; the slope of that relationship is the portfolio's beta.

The post also contrasts the two return objects. First, asset_returns_xts has a date index, not a date column; it is accessed via index(asset_returns_xts), while asset_returns_long has a column called date, accessed via the $date convention. Second, for the first date observation, January of 2005, asset_returns_long contains an NA while asset_returns_xts excludes the observation completely. Third, asset_returns_xts is in wide format, which in this case means there is a column for each of our assets; this is the format that xts likes, and it is the easier one for a human to read. In conclusion, the xts and tidyquant objects each have their own uses and advantages depending on the end goal. The next post considers how to visualize portfolio returns and how the different objects fit into different visualization paradigms.
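As a rough illustration of the beta idea, here is a minimal sketch that estimates CAPM beta as the slope of a linear regression of portfolio returns on market returns. The return series are simulated stand-ins, not the data from the post.

```r
set.seed(42)
market_returns    <- rnorm(60, mean = 0.005, sd = 0.04)  # 60 monthly returns
portfolio_returns <- 0.9 * market_returns + rnorm(60, sd = 0.01)

capm_fit <- lm(portfolio_returns ~ market_returns)
coef(capm_fit)[["market_returns"]]  # the slope estimate is the portfolio's beta

plot(market_returns, portfolio_returns,
     xlab = "Market returns", ylab = "Portfolio returns")
abline(capm_fit)  # the fitted CAPM line
```

A beta near 1 means the portfolio moves roughly one-for-one with the market; here it comes out near 0.9 by construction.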

You can read this post here: https://rviews.rstudio.com/2018/03/02/capm-and-visualization/

Introduction to Portfolio Returns

This post is about transforming daily asset prices into monthly portfolio log returns. First, we import daily prices for the five assets, and then we use two methods to convert them to monthly log returns: the first works in the xts world, and the second in the tidyverse/tidyquant world. There are differences between the two.
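Here is a minimal sketch of both routes, assuming five example tickers stand in for the post's assets; the xts route uses quantmod and PerformanceAnalytics, while the tidy route uses tidyquant.

```r
library(quantmod)
library(PerformanceAnalytics)
library(tidyquant)
library(dplyr)

symbols <- c("SPY", "EFA", "IJS", "EEM", "AGG")  # example tickers

# xts world: adjusted daily prices -> month-end prices -> log returns
prices_xts <- do.call(
  merge,
  lapply(symbols, function(s) Ad(getSymbols(s, auto.assign = FALSE)))
)
returns_xts <- Return.calculate(
  to.monthly(prices_xts, indexAt = "lastof", OHLC = FALSE),
  method = "log"
)

# tidyverse/tidyquant world: the same transformation on a long tibble
returns_long <- tq_get(symbols, get = "stock.prices") %>%
  group_by(symbol) %>%
  tq_transmute(select     = adjusted,
               mutate_fun = periodReturn,
               period     = "monthly",
               type       = "log")
```

The xts result is wide (one column per asset, dates in the index), while the tidyquant result is long (one row per asset per month, with a date column), which matches the xts-versus-long differences described above.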

You can read this post here: https://rviews.rstudio.com/2017/10/11/from-asset-to-portfolio-returns/

Three Strategies for Working with Big Data in R

The post explains three strategies for working with big data in R, along with examples of how to execute each one. The first strategy is Sample and Model: downsample the data to a size that can easily be downloaded in its entirety, and build a model on that sample. The second is Chunk and Pull: the data is chunked into separable units, and each chunk is pulled separately and operated on serially, in parallel, or after recombining. The third is Push Compute to Data: the data is compressed on the database, and only the compressed data set is moved out of the database into R.
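To make these concrete, here is a minimal sketch of all three strategies against a hypothetical SQLite database flights.sqlite with a flights table; the file, table, and column names are assumptions for the example.

```r
library(DBI)
library(dplyr)

con <- dbConnect(RSQLite::SQLite(), "flights.sqlite")  # hypothetical database

# 1. Sample and Model: pull a random sample and fit the model locally
sample_df <- dbGetQuery(con,
  "SELECT * FROM flights ORDER BY RANDOM() LIMIT 10000")
fit <- lm(arr_delay ~ dep_delay, data = sample_df)

# 2. Chunk and Pull: pull one carrier's rows at a time, operate on each chunk
carriers <- dbGetQuery(con, "SELECT DISTINCT carrier FROM flights")$carrier
chunk_means <- vapply(carriers, function(cc) {
  chunk <- dbGetQuery(con,
    "SELECT arr_delay FROM flights WHERE carrier = ?",
    params = list(cc))
  mean(chunk$arr_delay, na.rm = TRUE)
}, numeric(1))

# 3. Push Compute to Data: dbplyr translates the pipeline to SQL, so only
#    the small summary table ever leaves the database
summary_df <- tbl(con, "flights") %>%
  group_by(carrier) %>%
  summarise(mean_delay = mean(arr_delay, na.rm = TRUE)) %>%
  collect()

dbDisconnect(con)
```

Which strategy fits best depends on whether the model needs every row (Chunk and Pull), a representative subset is enough (Sample and Model), or the heavy lifting can be expressed in SQL (Push Compute to Data).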

You can read the post by clicking here: https://rviews.rstudio.com/2019/07/17/3-big-data-strategies-for-r/