I agree with the point made in Section 1, “automating research changes the definition of knowledge,” that using big data for research and learning changes the way we learn from information. Big data analysis is a powerful tool, and as it becomes more prevalent due to the increasing availability of rich datasets, its strengths and limitations will shape how we learn from information. Just as even simple statistics can be misleading, the results of big data analysis emphasize certain types of patterns and de-emphasize others. Big data analysis may cause researchers to avoid fields with little existing data, or to overlook knowledge contained in outlier data points, which are often discarded during big data analysis.
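
To make that concrete, here is a minimal sketch (synthetic data and a toy 3-sigma filter, my own illustration rather than anything from the article) of how a routine outlier cut can quietly change what a dataset appears to say:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: a weak linear trend plus three extreme points that
# carry real information (e.g., rare but important events).
x = rng.uniform(0, 10, 1000)
y = 0.5 * x + rng.normal(0, 1, 1000)
x = np.concatenate([x, [2.0, 5.0, 8.0]])
y = np.concatenate([y, [40.0, 45.0, 50.0]])

# Trend fitted on everything vs. after a routine 3-sigma outlier cut.
slope_all = np.polyfit(x, y, 1)[0]
keep = np.abs(y - y.mean()) < 3 * y.std()
slope_cut = np.polyfit(x[keep], y[keep], 1)[0]

print(f"slope with all points:   {slope_all:.2f}")
print(f"slope after 3-sigma cut: {slope_cut:.2f}")
# The automated cut quietly drops the three rare points. Whether that is
# "cleaning" or "discarding knowledge" depends on what those points
# represent -- a judgment the pipeline itself cannot make.
```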

I also agree with the points made in the “Not All Data Are Equivalent” section. It is extremely easy to assume that the sheer amount of data indicating a trend means the trend is present, without considering the strength of each data point. The usefulness of a big data set is difficult to quantify. I have heard companies that use big data analysis claim their dataset is extremely valuable because of the number of terabytes they have, while providing very little information about what those terabytes actually contain. More data is generally better for spotting trends, but the insight a dataset offers in context is affected by much more than its size. When analyzing a large dataset, you have to consider how meaningful each data point is in the context of the question you are trying to answer.
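
As a toy illustration of size versus quality (the population, the sampling bias, and every number here are invented for the example), a large but biased sample can produce a very precise estimate of the wrong answer, while a small representative sample is noisier but centered on the truth:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population of 100,000 values with true mean 100.
population = rng.normal(loc=100, scale=15, size=100_000)

# "Big" sample: 20,000 records, but collected in a way that quietly
# oversamples high values -- a bias the raw byte count says nothing about.
weights = np.exp(population / 30)
big_biased = rng.choice(population, size=20_000, p=weights / weights.sum())

# Small but representative sample: 500 uniformly random records.
small_clean = rng.choice(population, size=500, replace=False)

for name, sample in [("20k biased", big_biased), ("500 representative", small_clean)]:
    se = sample.std() / np.sqrt(sample.size)  # standard error of the mean
    print(f"{name:>18}: mean = {sample.mean():.1f} +/- {se:.2f}")
# The big sample reports a tight interval around the wrong value (~107),
# while the small one is noisier but centered on the truth (~100).
```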