# Why does scikit-learn's Lasso model return the exact same value for every prediction?

Every predicted value is exactly the same for my data. When I run other models on the same data, I get reasonably good scores.

My code:

```
from math import sqrt

import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error

lasso = Lasso()
lasso.fit(X_train, y_train)

# Score the model
lasso_score = lasso.score(X_test, y_test)

# Make predictions using the testing set
lasso_pred = lasso.predict(X_test)
print("Root mean squared error: %.2f"
      % sqrt(mean_squared_error(y_test, lasso_pred)))

plt.scatter(y_test, lasso_pred)
plt.xlabel('Measured')
plt.ylabel('Predicted')
plt.title('Lasso Predicted vs Actual')
plt.show()
```
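I can reproduce the same behaviour with made-up synthetic data (not my actual dataset): with the default `alpha=1.0`, Lasso shrinks every coefficient to zero when the signal is weak, so every prediction is just the intercept (the mean of the training targets). The variable names below are my own for this sketch.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic stand-in for my training data: weak linear signal plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([0.1, 0.2, 0.05, 0.0, 0.1]) + 18.0 + rng.normal(scale=0.1, size=100)

lasso = Lasso()  # default alpha=1.0
lasso.fit(X, y)

print(lasso.coef_)                      # every coefficient shrunk to zero
print(lasso.intercept_)                 # equals the mean of y
pred = lasso.predict(X)
print(np.unique(pred))                  # a single repeated value, like my output
```

Is this what is happening with my data, i.e. should I be tuning `alpha` rather than using the default?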

The results:

```
array([18.45183486, 18.45183486, 18.45183486, 18.45183486, 18.45183486,
       18.45183486, 18.45183486, 18.45183486, 18.45183486, 18.45183486,
       18.45183486, 18.45183486, 18.45183486, 18.45183486, 18.45183486,
       18.45183486, 18.45183486, 18.45183486, 18.45183486, 18.45183486,
       18.45183486, 18.45183486, 18.45183486, 18.45183486, 18.45183486,
       18.45183486, 18.45183486, 18.45183486, 18.45183486, 18.45183486,
       18.45183486, 18.45183486, 18.45183486, 18.45183486, 18.45183486,
       18.45183486, 18.45183486, 18.45183486, 18.45183486, 18.45183486,
       18.45183486, 18.45183486, 18.45183486, 18.45183486, 18.45183486,
       18.45183486, 18.45183486, 18.45183486])
```