TypeError: expected string or bytes-like object when deleting stop words from files

I have a set of files located in a folder called "Corpus", and I want to delete all the stop words from these files. I used a for loop to go through all the files, collected their words in a list called "words_In_document", tokenized the words, and then tried to filter out the stop words.

But I had this error:

Traceback (most recent call last):
  File "C:\Users\Super\PycharmProjects\pythonProject\All-Orders\first-request.py", line 20, in <module>
    tokenize_sentence = word_tokenize(words_In_document)
  File "C:\Users\Super\PycharmProjects\pythonProject\venv\lib\site-packages\nltk\tokenize\__init__.py", line 130, in word_tokenize
    sentences = [text] if preserve_line else sent_tokenize(text, language)
  File "C:\Users\Super\PycharmProjects\pythonProject\venv\lib\site-packages\nltk\tokenize\__init__.py", line 108, in sent_tokenize
    return tokenizer.tokenize(text)
  File "C:\Users\Super\PycharmProjects\pythonProject\venv\lib\site-packages\nltk\tokenize\punkt.py", line 1274, in tokenize
    return list(self.sentences_from_text(text, realign_boundaries))
  File "C:\Users\Super\PycharmProjects\pythonProject\venv\lib\site-packages\nltk\tokenize\punkt.py", line 1328, in sentences_from_text
    return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
  File "C:\Users\Super\PycharmProjects\pythonProject\venv\lib\site-packages\nltk\tokenize\punkt.py", line 1328, in <listcomp>
    return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
  File "C:\Users\Super\PycharmProjects\pythonProject\venv\lib\site-packages\nltk\tokenize\punkt.py", line 1318, in span_tokenize
    for sl in slices:
  File "C:\Users\Super\PycharmProjects\pythonProject\venv\lib\site-packages\nltk\tokenize\punkt.py", line 1359, in _realign_boundaries
    for sl1, sl2 in _pair_iter(slices):
  File "C:\Users\Super\PycharmProjects\pythonProject\venv\lib\site-packages\nltk\tokenize\punkt.py", line 316, in _pair_iter
    prev = next(it)
  File "C:\Users\Super\PycharmProjects\pythonProject\venv\lib\site-packages\nltk\tokenize\punkt.py", line 1332, in _slices_from_text
    for match in self._lang_vars.period_context_re().finditer(text):
TypeError: expected string or bytes-like object

How can I modify the code to fix this error and remove the stop words from the files?


from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
import os

print("Hello in the first request....")

documents = os.scandir('C:/Users/Super/Desktop/IR/homework/Lab4/corpus/corpus')
words_In_document = []
totalDocuments = 0
token_without_sw = []
for document in documents:
    with open(document, 'r') as doc:
        for line in doc:
            for word in line.split('\n'):
                words_In_document.append(word)

    totalDocuments += 1

tokenize_sentence = word_tokenize(words_In_document)

tokens_without_sw = [word for word in tokenize_sentence if not word in stopwords.words()]