Difference between unigram and BoW models

I think I understand the basics of what n-grams are, and I have read similar articles, but I am still confused about how the features would be represented. I have a couple of questions. When we create a BoW feature representation for, say, 'Test apple ball cat apple cat dog', the vocabulary has 5 words, and a feature representation of the sentence is a frequency count over the words [test, apple, ball, cat, dog], which in this case would be [1, 2, 1, 2, 1].
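For reference, here is a minimal sketch (in Python, which I am assuming for the examples) of how I am computing that count vector:

```python
from collections import Counter

sentence = "Test apple ball cat apple cat dog"
vocab = ["test", "apple", "ball", "cat", "dog"]

# Count each (lowercased) token, then read the counts off in vocabulary order.
counts = Counter(sentence.lower().split())
features = [counts[w] for w in vocab]
print(features)  # [1, 2, 1, 2, 1]
```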

In a unigram model, what exactly is different from this? What would a similar feature representation look like? Do we continue to keep track of frequencies, or do they become probabilities? Also, when implementing something like logistic regression, will feeding in just this new feature representation work? (Discount any concerns about overfitting, hyperparameter tuning, etc.; I only mean working in theory.)
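To make the two options I am asking about concrete, here is the same sentence with the raw counts normalized into relative frequencies, which is what I understand "becoming a probability" would mean (a maximum-likelihood unigram estimate):

```python
from collections import Counter

sentence = "Test apple ball cat apple cat dog"
vocab = ["test", "apple", "ball", "cat", "dog"]

counts = Counter(sentence.lower().split())
features = [counts[w] for w in vocab]   # raw frequencies: [1, 2, 1, 2, 1]

total = sum(features)                   # 7 tokens in the sentence
probs = [c / total for c in features]   # normalized to unigram probabilities
print(probs)  # [0.1428..., 0.2857..., 0.1428..., 0.2857..., 0.1428...]
```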

Finally, what happens in the case of bigrams? I have read that we build an n-dimensional table for an n-gram model. Let's assume I figure out how to construct one; how do I perform logistic regression using that table?
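For context, this is a sketch of what I imagine the bigram pipeline would look like, using scikit-learn's CountVectorizer with ngram_range=(2, 2) followed by LogisticRegression; the second document and both labels are made up purely for illustration, and I am not sure this is the right approach, which is part of what I am asking:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Made-up toy corpus and labels, purely for illustration.
docs = [
    "Test apple ball cat apple cat dog",
    "dog cat ball apple",
]
y = [0, 1]

# ngram_range=(2, 2) makes each feature a bigram count
# instead of a single-word count.
vectorizer = CountVectorizer(ngram_range=(2, 2))
X = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())
# ['apple ball' 'apple cat' 'ball apple' 'ball cat' 'cat apple'
#  'cat ball' 'cat dog' 'dog cat' 'test apple']

# The sparse bigram-count matrix X can be fed to logistic regression directly.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X))  # predicts the training labels back: [0 1]
```

If this is roughly right, then the "table" would just be this (sparse) bigram-count matrix, but I would appreciate confirmation.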
