Words like "nymphomaniac" aren't bad words, per se, but if you're using the package to generate temporary usernames, passwords, identifiers, etc. they could be potentially embarrassing choices.
Are there any plans to have a way to fetch only "safe" or "unsafe" words? If someone did fork this, would you approve such a pull request?
Currently I don't have any plans to do this. The problem is twofold: (1) there are 25,488 words to look through and judge the safety of, and (2) safety is subjective. If you can think of a way to overcome these problems, I'm all ears.
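One possible way around both problems is to delegate the judgment to an existing, community-maintained denylist (such as the LDNOOBW list) instead of reviewing all 25,488 words by hand. A minimal sketch of that idea, assuming a plain-text denylist file with one word per line; `all_words`, `load_denylist`, and `safe_words` are hypothetical names, not part of this package's API:

```python
# Filter a word list against an external denylist file.
# The denylist path and helper names are illustrative assumptions.

def load_denylist(path: str) -> set[str]:
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def safe_words(all_words: list[str], denylist: set[str]) -> list[str]:
    # Keep only words that don't appear in the denylist.
    return [w for w in all_words if w.lower() not in denylist]
```

This doesn't make safety objective, but it at least moves the subjective call onto a list that many people already maintain and argue about.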
What I've done in the past for your kind of use case is to grab a much smaller set of words that you know are safe and use those. For example, in a Discord bot I worked on, I used a list of common animal names scraped off the web somewhere to generate identifiers (as I've done here). Hope this helps.
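For illustration, a minimal sketch of that curated-list approach; the animal names and the identifier format are made up here, not taken from the actual bot:

```python
# Generate throwaway identifiers from a small, hand-picked word list.
import secrets

ANIMALS = ["otter", "heron", "lynx", "badger", "puffin", "marmot"]

def temp_identifier() -> str:
    # e.g. "lynx-4821"; secrets is used so the suffix is unpredictable.
    return f"{secrets.choice(ANIMALS)}-{secrets.randbelow(10000):04d}"

print(temp_identifier())
```

Because you pick every word yourself, the "is this safe?" question never comes up at runtime.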