Why is my site not crawled by Google? (the site is in a subdirectory of the root folder)
I have two WordPress installations on my host. My main WordPress site is in the root folder; it works correctly and is indexed by Google. The second WordPress installation is in a subdirectory of the root folder and works fine, but I have a serious problem: it is not indexed by Google. I submitted an indexing request through Google Search Console, but that did not help. Both sites have a robots.txt and a sitemap.xml, and everything looks correct, yet the problem persists. The second site is a forum, and it is very important to me that Google indexes it.
Make sure that Search Engine Visibility: "Discourage search engines from indexing this site"
is disabled in your Reading settings. You can check this under Settings -> Reading.
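Besides that setting, it is worth confirming that the subdirectory site's robots.txt does not block Googlebot. A minimal sketch using Python's standard-library robots.txt parser; the sample rules and URLs below are placeholders, not the asker's actual file:

```python
from urllib.robotparser import RobotFileParser

# Sample robots.txt content for illustration; substitute the file
# actually served for the subdirectory site.
SAMPLE_ROBOTS = """\
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
"""

def is_allowed(robots_txt: str, url: str, agent: str = "Googlebot") -> bool:
    """Return True if `agent` may crawl `url` under the given robots.txt."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

print(is_allowed(SAMPLE_ROBOTS, "https://example.com/forum/"))     # True
print(is_allowed(SAMPLE_ROBOTS, "https://example.com/wp-admin/"))  # False
```

If the forum path comes back as disallowed, that, rather than the Reading setting, would explain the missing index coverage.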
See also questions close to this topic
Scrolling effects on WordPress
I have added two images to a page. The first image is shown as soon as the user opens the page. When the user scrolls, it should take them straight to the next image, skipping the normal in-between scrolling. To be clear, I want smooth scrolling that snaps the user to the next element when they scroll, like the Apple AirPods Pro website: https://www.apple.com/in/airpods-pro/. Please help me.
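One way to get that snap-to-next-section behavior without JavaScript is CSS scroll snapping. A minimal sketch, assuming each image sits in its own full-viewport section (the class names are made up):

```css
/* Container scrolls vertically and snaps hard to each child section */
.snap-container {
  height: 100vh;
  overflow-y: scroll;
  scroll-snap-type: y mandatory;
}

/* Each full-screen image section is a snap stop */
.snap-section {
  height: 100vh;
  scroll-snap-align: start;
}
```

With `mandatory` snapping, the browser always settles on a section boundary, which approximates the Apple-style effect; sites like the AirPods page layer additional JavaScript animation on top.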
- Login validation not working on my account page (WordPress)
Is there a WordPress plugin that lets me create a blank table, like in MS Word?
I am using WordPress for one of my websites, and I am looking for a plugin or another way to create a blank table in WordPress posts. I have tried TablePress and other plugins, but they don't fit: I have to enter the data in the plugin first and then insert the table into the post, which doubles the work. What I am looking for is a way to create a blank table with a given number of rows and columns and then type the data into it directly, like in MS Word.
Thanks in Advance
Remove unused CSS - How to fix it on Pagespeed?
I have a Brazilian client, and one of the goals is to reach a 100% score on the Google PageSpeed test. I just can't fix one of the issues: "Remove unused CSS". Is inlining the CSS one possible solution?
The website: www.euamocupons.com.br
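Inlining the critical CSS is indeed one common fix for this audit: put only the above-the-fold rules in a `<style>` tag and load the full stylesheet without blocking rendering. A hedged sketch of that pattern; the file path and rules are placeholders:

```html
<head>
  <!-- Critical above-the-fold rules inlined: no render-blocking request -->
  <style>
    body { margin: 0; font-family: sans-serif; }
    .hero { min-height: 60vh; }
  </style>
  <!-- Full stylesheet loaded asynchronously, applied once downloaded -->
  <link rel="preload" href="/css/main.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>
</head>
```

This does not remove the unused rules themselves; tools that purge unused selectors from the stylesheet address the audit more directly, and the two approaches combine well.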
What is Script Evaluation?
I'm wondering how I can fix this. Basically, I have a couple of questions related to it:
- What is script evaluation time?
- Does evaluation include downloading, parsing, and compiling?
Sitemap implementation for a blog website using React
I have been working on a React single-page application for blog posts and have created dozens of blog entries in it, but how should I implement a sitemap for the website? I have already generated a sitemap.xml file; should I just push it to production, or should I build a UI for the sitemap?
Let's say this is my sitemap.xml:
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://dummysite.com/</loc></url>
  <url><loc>https://dummysite.com/about</loc></url>
  <url><loc>https://dummysite.com/contact</loc></url>
  <url><loc>https://dummysite.com/copy-rights</loc></url>
  <url><loc>https://dummysite.com/post/12</loc></url>
  <url><loc>https://dummysite.com/post/13</loc></url>
  <url><loc>https://dummysite.com/privacy-policy</loc></url>
  <url><loc>https://dummysite.com/terms-of-service</loc></url>
</urlset>
What should I do with this? Please help me out.
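On the "build a UI" part: a sitemap is read by crawlers, not users, so no UI is needed; generating the file at build time and serving it at /sitemap.xml is enough. A minimal generation sketch in Python (the domain, routes, and post IDs are placeholders mirroring the example above):

```python
# Generate a sitemap.xml for a set of routes; run this at build time
# and ship the output file with the production bundle.
STATIC_ROUTES = ["/", "/about", "/contact", "/privacy-policy"]
POST_IDS = [12, 13]  # placeholder blog-post ids
BASE = "https://dummysite.com"

def build_sitemap(base, routes, post_ids):
    """Return a sitemap.xml document covering static routes and posts."""
    urls = routes + [f"/post/{pid}" for pid in post_ids]
    entries = "\n".join(f"  <url><loc>{base}{u}</loc></url>" for u in urls)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>\n"
    )

print(build_sitemap(BASE, STATIC_ROUTES, POST_IDS))
```

In a real project the post IDs would come from the blog's data source rather than a hard-coded list, so new posts appear in the sitemap on every build.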
Google refuses to index my page
My page used to get indexed and was working normally, but a couple of days ago Search Console started showing a "too many redirects" error that had never appeared before. The site itself was normal and had not been edited by anyone. Since the day that error appeared, my pages have slowly disappeared from Google. It also says that Googlebot is refusing to crawl, but when I look at the robots.txt file it seems normal:
Sitemap: https://trekkingteamgroup.com/sitemap.xml
User-agent: *
Disallow:
Whenever I try to reindex, the request is refused. I have no idea why this problem is occurring.
I have checked everything but could not fix it, and I am now contacting Google. Please help me sort out this error.
How to fix search engine results that are appearing as Chinese characters?
When I type site:mysite.com into a search engine, the results list shows Chinese characters. How can we remove them and prevent these spam URLs? Please provide steps on how to fix it.
How can I scrape social media sites the same way Google does? Does Google use each site-specific API, or site-nonspecific crawling?
How can I scrape social media sites the same way Google does? Does Google use each site's specific API to extract site contents, or site-nonspecific crawling?
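For context, generic web crawling is site-nonspecific: fetch a page, extract its links, and repeat, while respecting robots.txt and rate limits. A minimal sketch of the link-extraction step using only Python's standard library; the HTML below is a stand-in for a fetched page, not a real site:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags, the core of generic crawling."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A stand-in for a downloaded page; a real crawler would fetch this
# over HTTP, check robots.txt first, and throttle its requests.
PAGE = '<html><body><a href="/profile/alice">Alice</a> <a href="/post/1">Post</a></body></html>'

extractor = LinkExtractor()
extractor.feed(PAGE)
print(extractor.links)  # ['/profile/alice', '/post/1']
```

Note that many social networks disallow most crawling in their robots.txt and terms of service, so what Google sees from them is itself limited.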
Malicious URLs crawled by Google cannot be traced
One of my websites has hundreds of malicious URLs that have been crawled by Google, but we can't seem to find their source. There is no trace of those files or folders on the server. The website had been hacked and was rebuilt from scratch without reusing any files from the previous site.
It is a WordPress website hosted on Flywheel that uses Genesis and a paid child theme. Here's the plugin list
Here are a few examples of the said URLs
What do I do to prevent this from happening again?
Can an old Redirect 301 in .htaccess be removed if the source URL is marked as Excluded in Google Search Console?
I'm cleaning up my huge .htaccess file, which got bloated over the years. There are many Redirect 301 entries in my .htaccess that are years old. I see a few of the old source URLs from the .htaccess marked as Excluded in Google Search Console. Now, is it safe to remove these entries from the .htaccess? Can I assume that these redirects are unnecessary now? For example, for this entry:
Redirect 301 /xxxxx https://www.yyyyyy.com
in the Google Search Console -> Index -> Coverage report, I see that /xxxxx is marked as Excluded. So am I good to remove the above entry from .htaccess, given that Google has already indexed a canonical version of it?
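Before deleting anything, it may help to list every old redirect source path so each one can be cross-checked against the Excluded URLs in Search Console. A small sketch; the .htaccess content shown is a placeholder, and a real run would read the actual file:

```python
import re

# Placeholder .htaccess content; read the real file instead.
HTACCESS = """\
RewriteEngine On
Redirect 301 /xxxxx https://www.yyyyyy.com
Redirect 301 /old-page https://www.yyyyyy.com/new-page
"""

def redirect_sources(htaccess_text):
    """Return the source paths of all `Redirect 301` lines."""
    pattern = re.compile(r"^\s*Redirect\s+301\s+(\S+)\s+\S+", re.MULTILINE)
    return pattern.findall(htaccess_text)

print(redirect_sources(HTACCESS))  # ['/xxxxx', '/old-page']
```

Checking the resulting list against the Coverage report, and against server logs for recent hits, gives a more complete picture than the Excluded status alone.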
What is the behavior Google expects of a sitemap link?
I need to provide a sitemap for my new website. I already have it ready as an XML file at an external link. The thing is: I'm having a lot of trouble with the "providing" part.
I've seen some sitemaps where you just access the link and the browser starts the download right away; it doesn't load a page or anything, it's just a plain download link.
My actual doubt is: does Google expect my sitemap link to download the file, or can I just render an XML response on the page?
I am using plain Scala and Lift, and it's... complicated to create an "automatic download link" behavior, but I easily managed to create an XMLResponse at my /sitemap.xml link (my XML file loads perfectly on the page).
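For what it's worth, Google only needs the sitemap URL to return the XML with a 200 status; whether a browser renders it inline or downloads it is controlled by the Content-Type/Content-Disposition headers and does not matter to the crawler. A minimal sketch of such an endpoint, written as a Python WSGI app purely for illustration (not Lift; the sitemap content is a placeholder):

```python
# A minimal WSGI app serving sitemap.xml inline. Crawlers only need a
# 200 response with the XML body; a Content-Disposition header that
# forces a browser download is optional and irrelevant to them.
SITEMAP = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    '  <url><loc>https://dummysite.com/</loc></url>\n'
    '</urlset>\n'
)

def app(environ, start_response):
    if environ["PATH_INFO"] == "/sitemap.xml":
        body = SITEMAP.encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/xml"),
                                  ("Content-Length", str(len(body)))])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

So an XMLResponse that renders in the browser, as described above, should be perfectly acceptable to Google.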
Thanks in advance!