Wiktionary:Grease pit/2022/January


{{metathesis}}

Happy New Year everyone!

{{metathesis}} was created by an IP editor a month or so ago. The cap=1 parameter has been implemented using MediaWiki template logic rather than Lua, so it throws an ugly error when used: Metathesis. Could someone with the necessary rights go in and check it out? Thanks! This, that and the other (talk) 03:21, 1 January 2022 (UTC)[reply]

@This, that and the other: After looking at the Lua code, I noticed it had an "ignore-params" parameter for parameters that aren't used by the Lua code, so I added |ignore-params=cap to the template, and that seems to have fixed it. Chuck Entz (talk) 04:09, 1 January 2022 (UTC)[reply]
Thanks Chuck! This, that and the other (talk) 04:55, 1 January 2022 (UTC)[reply]

automated creation of several hundred entries

I have an open-source project I'm working on, https://github.com/bcrowell/ransom/tree/master/glosses , as part of which I'm in the process of compiling a complete set of English definitions for all the Greek words appearing in the Iliad. The project is under the same license as Wiktionary, and in fact many of my entries are paraphrases of Wiktionary's or even verbatim copies. However, quite a few are based on public-domain sources, especially a 1924 dictionary by Cunliffe. Roughly a third of them, at present about 600, are for words that have no entries in Wiktionary. It would be quite trivial for me to write a script that would generate basic Wiktionary entries for these.
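[For readers following along: the "basic entries" mentioned above could be generated by a sketch along the following lines. The layout and the use of the generic {{head|grc|...}} headword template are illustrative assumptions, not the project's actual output format.]

```python
def make_entry(lemma: str, pos: str, definition: str):
    """Return a (page_title, wikitext) pair for a minimal Ancient Greek
    entry skeleton: language header, part-of-speech header, generic
    headword line, and a single numbered definition."""
    text = (
        "==Ancient Greek==\n\n"
        f"==={pos.capitalize()}===\n"
        f"{{{{head|grc|{pos.lower()}}}}}\n\n"
        f"# {definition}\n"
    )
    return lemma, text

# e.g. make_entry("πτύω", "verb", "to spit") yields a page titled πτύω
# containing a {{head|grc|verb}} headword line.
```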

However, I want to make sure that I do this in a way that is helpful to Wiktionary and doesn't inadvertently create the need for a lot of extra work by folks here. I'm thinking I would probably do a dozen words or something at first, ask for comments, and then do more. Or I could generate an online file of all the entries, which folks could then examine and comment on before I upload any at all. If I have the stamina to complete the project, then I would be doing many more of these on an ongoing basis, probably ultimately amounting to a few thousand new entries.

Technical things I could use help with: (1) a script to upload an entry (something that runs on linux); (2) thoughts on how to avoid duplication. The main thing I'm concerned about in terms of duplication is that there can be Homeric forms that are just respellings or contractions of Attic forms that already have entries in Wiktionary. To some extent I have this covered already because I can easily look up what is Project Perseus's lemma for a given word. Usually this is the Attic form. So for example, Homer has ξεῖνος, whereas the Attic form is ξένος, but I can easily detect this on an automated basis and avoid creating a redundant entry for ξεῖνος. I also have data on frequencies of words, so another pretty straightforward precaution would be not to upload any new entry for a word whose frequency is above some cut-off -- such a word would be likely already to have a Wiktionary entry.--Fashionslide (talk) 16:34, 1 January 2022 (UTC)[reply]
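[The duplication precautions described above, plus a page-existence check, amount to a simple filter. The data structures here (a Homeric-form-to-Perseus-lemma map, a frequency table, a set of existing titles) are hypothetical names for illustration, not the project's actual code.]

```python
def should_create(form, lemma_of, freq, existing_titles, freq_cutoff=50):
    """Decide whether to generate a new entry for a Homeric form."""
    lemma = lemma_of.get(form, form)
    if lemma != form:
        return False  # respelling/contraction of a lemma (e.g. ξεῖνος for ξένος)
    if freq.get(form, 0) > freq_cutoff:
        return False  # frequent word: very likely already has an entry
    if form in existing_titles:
        return False  # an entry already exists at this title
    return True
```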

This is very exciting! From a non-technical standpoint I fully support you in this endeavour. Our coverage of Ancient Greek has many gaps and could really use some kind of corpus-based or dictionary-based import. I'm coincidentally in the middle of a similar project to import a bunch of missing Latin entries.
It sounds like you've given this a good deal of thought. Based on my experience so far with the Latin entries, it's important to try and prevent mistakes from creeping in, as Ancient Greek entries don't always get a lot of attention from critical eyes. Perhaps @Mahagaja might have some thoughts. This, that and the other (talk) 01:20, 2 January 2022 (UTC)[reply]
Very cool to see what you're doing -- obviously great minds think alike :-) I'm surprised that you're excluding hapaxes, but maybe I need to think about that more. I would think that the Homeric hapaxes would be some of the most straightforward words to include. Often they're just straightforward compounds like παλινάγρετος. And I certainly wouldn't want to miss πτύω, to spit. Have you gotten to the point of looking at scripting the actual creation of the entries?--Fashionslide (talk) 01:41, 2 January 2022 (UTC)[reply]
There are a few reasons why I'm not doing this particular import fully automatically. The scans of the dictionaries have been automatically parsed (by Perseus) based on OCR detection of bold and italic formatting in the originals, and the results for Lewis and Short are so patchy that a fully automated import would be a disaster. Elementary Lewis came out better, but it lacks some senses and grammatical info. Plus, I enjoy researching and writing etymologies, so I'm taking the time to add those manually. That's just a choice I made; the entries would be totally fine without etymological information and someone else would eventually come along and add it.
As for hapaxes, I'm skipping them because I want to use my limited time as efficiently as possible. If I could do the import fully automatically, I would not be so worried about that. (Spurious forms would still need to be removed, though. Sometimes L&S cross-references A to B, but then the entry at B says that it is "not spelled A".)
It's definitely possible to fully script the creation of entries if your data is cleaner than mine - perhaps someone who has done a similar project before might be able to comment here. This, that and the other (talk) 05:36, 2 January 2022 (UTC)[reply]
I see, thanks for explaining. Like you, I'm writing the definitions myself. I guess the difference is just that I've already got them compiled for a separate project. All I want to automate is the final step of putting them on Wiktionary.--Fashionslide (talk) 13:32, 2 January 2022 (UTC)[reply]

It looks like the standard tool for making bots for WP is pywikibot, and WP has an elaborate set of policies for proposing, testing, and approving bots. Does anyone know whether pywikibot works for wiktionary, and whether wiktionary has a similar formal process?--Fashionslide (talk) 15:52, 2 January 2022 (UTC)[reply]

I am willing to be corrected by others, but I would say that the automated creation of a finite set of entries that you've prepared yourself isn't truly a "bot" in the scope of WT:Bots. A bot is a script that goes ahead under its own steam and edits pages without supervision for an indefinite period of time. The Ancient Greek creation could very well take place under your main user account using pywikibot (albeit with a time lag between edits to avoid flooding Special:RecentChanges). This, that and the other (talk) 00:53, 3 January 2022 (UTC)[reply]
@Fashionslide, This, that and the other pywikibot works for Wiktionary. I use it for all the bot work I do, along with mwparserfromhell, which works well for parsing MediaWiki templates if you want to write a script to modify existing entries. I have written scripts to generate Russian, Bulgarian and Ukrainian entries from manual specifications, and scripts to push entries to Wiktionary, and I have added a lot of entries in this fashion (esp. Russian entries). So yes, this is definitely possible. My scripts are written in Python and run on MacOS, so they should work in Linux with few if any changes. As for ξεῖνος vs. ξένος, it is useful to have both, of course without duplication; one should simply point to the other. As for bots, Wiktionary does have a formal process for getting a bot account. Running a bot using your own account rather than a bot account is possible but normally not a good idea. In this case it might be acceptable in the short run (while waiting to get a bot account approved), but if you plan to stick around a bit it would be a good idea to start the process to get a bot account. It's not a huge pain to do so, but it does require a vote, which usually lasts two weeks. Benwing2 (talk) 03:53, 3 January 2022 (UTC)[reply]
@This, that and the other, Benwing2 Thanks, Benwing2, that's very helpful. I will try cautiously getting started with pywikibot. If I get something working that seems OK, I will ask for feedback and initiate the process of requesting a separate bot account.--Fashionslide (talk) 16:44, 6 January 2022 (UTC)[reply]

You folks have been super nice, but the unrelenting hostility and dysfunction of Wikipedia has prompted me to mung the password on my Fashionslide account on both WP and Wiktionary and to stop contributing to both projects. If anyone is interested in continuing this project, here is the software I wrote to generate Wiktionary files: https://github.com/bcrowell/ransom/tree/master/bot , and here is its current output: http://lightandmatter.com/wiktionary_greek_entries.txt (In my browser, the Greek characters in the file show up munged, but if I actually download the file and look at it, it's fine.)--Fashionslide (talk) 17:24, 8 January 2022 (UTC)[reply]

@Fashionslide Very sorry to hear that. What happened on the Wikipedia side? In many ways they are quite separate from Wiktionary; same underlying MediaWiki software but otherwise one is independent of the other. Wikipedia policies do not apply here (and vice versa). Benwing2 (talk) 20:56, 8 January 2022 (UTC)[reply]

References: bullets and numbers

According to Wiktionary:References, references should be preceded by a bullet point, as shown in the example provided. However, if I have inline references, e.g. in an etymological section, which need to be displayed as numbers, I have to use <references/> beneath the reference header. This tag prepends numbers to references, rather than bullet points, leading to inconsistencies between entries which have references only at the bottom and those which also have inline references; cf. Сараево (Saraevo) and течнокристален екран (tečnokristalen ekran). How should this be avoided? Martin123xyz (talk) 15:33, 2 January 2022 (UTC)[reply]

The references section in Macedonian Сараево (Saraevo) is actually misused (should be "Further reading"). See Wiktionary:Votes/2016-12/"References"_and_"External_sources" and Wiktionary:Votes/2017-03/"External_sources",_"External_links",_"Further_information"_or_"Further_reading". Fytcha (talk) 15:42, 2 January 2022 (UTC)[reply]
@Fytcha The reference section at Macedonian Сараево (Saraevo) may be misused, but it is in keeping with Wiktionary:References, which says that "references referring to an entry as a whole, or many parts of an entry should be listed directly under the ===References=== header, usually preceded by a bullet (*). However, there is no formal policy on when to use the <ref></ref> syntax and when to use bulleted lists." Daniel Carrero has written that he "edited WT:EL to conform with the results of this vote", but WT:EL still contains a link to Wiktionary:References, where the example for "water" is formatted like the references at Macedonian Сараево (Saraevo). This contradicts point 3 of the 2016 vote, which passed except for point 4, so it seems that the results of the vote were not implemented on all pages where they should have been. This is a problem because users looking for information regarding the proper way to format entries are much more likely to look up policy pages than vote pages, which they may not even know how to find. When I need information regarding references, I naturally type "wiktionary" and "references" into Google and then read Wiktionary:References. If this page had been updated, perhaps there would not have been so many Macedonian pages with a reference section corresponding to what should be a further reading section, as I am just finding out. Martin123xyz (talk) 16:22, 2 January 2022 (UTC)[reply]
I also cannot reliably discern the difference between the two headers, neither from the policy pages nor from the votes, and the issue with numbered references seems to have been completely ignored ever since the proposed regulation that “the reference section requires using footnotes marking the specific statements […]” failed to pass. Fay Freak (talk) 16:32, 2 January 2022 (UTC)[reply]

alternative spelling vs alternative form

Is foo-bar an alternative form of foobar or an alternative spelling? How about foo bar? General Vicinity (talk) 19:24, 2 January 2022 (UTC)[reply]

@General Vicinity I would say alternative spelling. Alternative spellings differ only in spelling, while alternative forms usually differ in some other property as well (pronunciation, ending, inflection, etc.). Benwing2 (talk) 03:55, 3 January 2022 (UTC)[reply]

Words used solely by non-native speakers

Moved to WT:TR#Words used solely by non-native speakers DCDuring (talk) 16:22, 3 January 2022 (UTC)[reply]

Link changed from BP to TR as the Tea Room is where it was actually moved. - -sche (discuss) 23:37, 4 January 2022 (UTC)[reply]

Bot needed? (review template wikipedia links)

I looked at the entry disuse, which has had a {{wikipedia}} link in it since the entry was created in *2006*. There is certainly no such article at Wikipedia now, and I doubt it was deleted recently, as I can't find any trace of a deletion.

Experimenting, I checked a few searches with "insource:/[{][{]wikipedia[}][}]/" and relatively quickly found another mis-linked example, Malukus. There is no such article at Wikipedia, though there is a disambiguation page w:maluku, as well as w:Maluku Islands, w:Maluku (province), and a few others.

I'm thinking a bot that reviews entries here and notes all the {{wikipedia}} mis-links somewhere would be a nice little project, and then that generated list could drive a cleanup effort. Simply deleting the mis-links would be inappropriate given the example above Malukus, but then that example also points out that entries here can be less than precise (see that definition) and need repair.

Not knowing where to mention this request I mentioned it at WT:ID and they suggested coming down this avenue. As noted there I can't implement this myself now. Could someone consider this project? Shenme (talk) 00:43, 3 January 2022 (UTC)[reply]
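[A sketch of the template-scanning half of such a bot. It handles only {{wikipedia}}, {{wiki}} and {{wp}}, and assumes the first positional parameter, when present, names the target article, with the entry's own page name as the default; both are illustrative simplifications of the real templates.]

```python
import re

# Match {{wikipedia}}, {{wiki}} or {{wp}}, optionally with parameters.
_WP_RE = re.compile(
    r"\{\{\s*(?:wikipedia|wiki|wp)\s*(?:\|([^{}]*))?\}\}",
    re.IGNORECASE,
)

def wikipedia_targets(pagename, wikitext):
    """Collect the Wikipedia article titles an entry's sister-project
    templates point at."""
    targets = []
    for m in _WP_RE.finditer(wikitext):
        params = (m.group(1) or "").split("|")
        # skip named parameters like lang=...; default to the page name
        first = next((p.strip() for p in params if p.strip() and "=" not in p), "")
        targets.append(first or pagename)
    return targets
```

Each collected title could then be checked against Wikipedia's API (action=query, where nonexistent pages are flagged "missing") to build the cleanup list, rather than deleting anything automatically.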

If someone makes the list, I'll give it a run-through. bd2412 T 01:32, 3 January 2022 (UTC)[reply]
I generated a list at User:This, that and the other/broken Wikipedia links/2022-01-01. As you can see, the vast majority are links to articles ending in "language" or "phonology" which need to be created as redirects on Wikipedia and/or fixed in our Lua modules. The list also includes links to non-main-namespace Wikipedia pages; for example, our entry Cyclorrhapha links to w:Talk:Brachycera. This talk page does exist, but I don't think it makes sense for our entry to be directing the reader to what is an internal Wikipedia work page. This, that and the other (talk) 03:06, 3 January 2022 (UTC)[reply]
Okay I see now that list isn't super useful, and it wasn't quite what you were asking for either, as it included all links to Wikipedia, however constructed. How about User:This, that and the other/broken Wikipedia links/2022-01-01/only via wikipedia template? This, that and the other (talk) 03:33, 3 January 2022 (UTC)[reply]
There are many more Wikipedia templates and redirects, such as {{wp}}, {{Wikipedia}}, {{wiki}}, {{pedia}}, {{slim-wikipedia}}, {{slim-wp}}, {{swp}}, {{in wikipedia}} and probably more. DTLHS (talk) 03:44, 3 January 2022 (UTC)[reply]
My list accounts for {{wikipedia}}, {{wiki}} and {{wp}} only for now. This, that and the other (talk) 03:57, 3 January 2022 (UTC)[reply]
I added {{slim-wikipedia}}, {{pedia}} and their redirects to the list as well. Happy cleaning! This, that and the other (talk) 05:00, 3 January 2022 (UTC)[reply]
I've been working on the Translingual entries with these problems, which are a large share of the total. In the future, if it isn't too much trouble, could you segregate these? That would make both the Translingual entries and the others easier to deal with. Also dividing the list into sections of 20, 50, or even 100 would make striking or deleting the ones that have been corrected much easier. DCDuring (talk) 14:36, 5 January 2022 (UTC)[reply]
The Translingual entries, especially, would also benefit from the same kind of lists for Commons and Wikispecies. For all of these, once the backlog is cleaned up an annual run of each would be helpful because Wikipedia articles are deleted or moved, often without due consideration of the need of other wikis for redirects to the new title. Oh, yea, THANKS. DCDuring (talk) 14:41, 5 January 2022 (UTC)[reply]
Thanks for the feedback - I'll regenerate the list from the next dump with your suggestions in mind. This, that and the other (talk) 01:26, 6 January 2022 (UTC)[reply]

Taser

Why is my edit on taser being reverted? It is a constructive edit. Vandalism would be writing “COUNTRYBOY603 WAS HERE” in all capitals. --75.166.166.170 03:18, 3 January 2022 (UTC)[reply]

Vandalism isn't just the addition of bad content (analogy: spray-painting graffiti); it's also the wanton removal of valuable content for no reason (analogy: destruction of property). You removed:
  • a relevant link from the etymology section,
  • the number of syllables from pronunciation,
  • a valid anagram,
  • a recording of the pronunciation in Dutch
As for the substantive revision, I'm not sure I agree with that change either, but at least it's not clearly vandalism. You changed the definition so that tasering must result in unconsciousness, and the target must be a person. However, we can find or imagine uses of the verb "taser" where neither of those applies, e.g., "police tasered the dog to no effect". 70.172.194.25 03:45, 3 January 2022 (UTC)[reply]

Reducing Lua memory errors

@This, that and the other, Eruton I see that User:This, that and the other created {{inh-lite}}, {{m-lite}} and friends to reduce memory usage. I'm thinking of another approach, which is to do the equivalent of {{multitrans}} for large chunks of a page. {{multitrans}} is used to wrap translation tables and ensures that the code to implement {{t}}, {{t+}} and friends is loaded only once. Essentially, you wrap the whole translation table in {{multitrans}} and replace all occurrences of {{t}} with {{tt}} and {{t+}} with {{tt+}}. These latter templates are just pass-throughs, i.e. they do nothing but generate special text, which is interpreted by {{multitrans}} to make calls to the translation module. On pages with large translation tables, it makes a massive difference: I've seen it reduce the memory from over 52M (the limit) to around 25M. This was motivated by a change I made to Module:table, where I added three new small functions shallowContains(), shallowTableContains() and shallowInsertIfNot(). By itself this increased the memory usage of some pages by over 3MB, leading to memory errors. Deleting these three functions in [1] resolved the issue; but it shows that even small code changes made to commonly used modules can have huge effects when the modules are loaded over and over and over. Loading the modules just once can save a huge amount of memory. The new template might be called {{reduce-memory}} or something and might handle something like {{m}}, {{l}}, {{inh}}, {{bor}}, {{cog}}, {{der}}, {{ux}}, {{uxi}}, {{lb}} and {{q}}/{{i}}/{{qualifier}}/{{qual}}. Each handled template would need a pass-through equivalent; not sure what to name the pass-throughs, but it should be easy to type; maybe {{*m}}, {{*l}}, {{*inh}}, etc.? The advantage over {{l-lite}}, {{inh-lite}} etc. is that the new pass-through templates use exactly the same syntax as the regular templates and handle most or all of their features, rather than having only a limited subset as the "lite" templates currently do. Thoughts? Benwing2 (talk) 08:29, 4 January 2022 (UTC)[reply]
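[A toy illustration of the pass-through mechanism, outside MediaWiki: the pass-through templates emit inert marker text, and one wrapper invocation expands every marker, so the expansion code is loaded once per page rather than once per template. The ⦃¦⦄ marker syntax mirrors the one {{multitrans}} uses; the exact marker format and the HTML produced here are made up for the sketch.]

```python
import re

def expand_translation(lang, term):
    """Stand-in for the translation module's formatting function."""
    return f'<span class="{lang}">[[{term}#{lang}|{term}]]</span>'

def expand_all(wrapped_text):
    """One invocation expands every ⦃⦃t¦lang¦term⦄⦄ marker in the text."""
    return re.sub(
        r"⦃⦃t¦([^¦⦄]+)¦([^¦⦄]+)⦄⦄",
        lambda m: expand_translation(m.group(1), m.group(2)),
        wrapped_text,
    )
```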

@Erutuon Sorry, typo :( ... Benwing2 (talk) 08:29, 4 January 2022 (UTC)[reply]
Also pinging @Surjection, who has done significant Lua hacking, and @Rua, who came up with the original idea for {{multitrans}}. Benwing2 (talk) 08:30, 4 January 2022 (UTC)[reply]
I was thinking about this over the New Year and I am beginning to believe that, to a certain extent, we are barking up the wrong tree on this issue. The single vowel pages in particular have got so long that even on a powerful computer, they are difficult for a reader to interact with. The problem is not so much the Lua memory limit but the fact that the pages are simply too long, Lua or no Lua. From this standpoint, the only solution is to split them, perhaps along the lines of User:This, that and the other/a, with appropriate code added to {{l}} etc so that when a "split entry" is linked to, the link goes to the appropriate subpage.
In general though, I'd definitely be in favour of anything that works to reduce memory usage and allows {{l-lite}} etc to be deleted! The generic {{head}} template and the {{g}} template are two more that could be considered for your {{reduce memory}} concept. This, that and the other (talk) 08:50, 4 January 2022 (UTC)[reply]
The correct solution for letter entries is Wiktionary:Votes/2020-07/Removing letter entries except Translingual, but unfortunately it failed to pass. — SURJECTION / T / C / L / 15:44, 4 January 2022 (UTC)[reply]
@Surjection: I think we have a good shot at passing that if we reproposed it as applying only to letters used (natively) in more than N (≈10) languages. Fytcha (talk) 16:05, 4 January 2022 (UTC)[reply]
@Surjection, This, that and the other, se isn't a letter and doesn't host translations, yet it's still running out of memory. I do think we should run Fytcha's suggested modification of the letter vote, but it's not a solution. The only long-term solution that has been proposed is WT:Per-language pages proposal. —Μετάknowledgediscuss/deeds 18:54, 4 January 2022 (UTC)[reply]
I don't think removing letter entries is likely to pass in any guise. Plus, (a) the opposers made some cogent arguments that I'm inclined to agree with, and (b) merging the letter entries was never going to solve the Lua memory errors in any case. If the {{reduce memory}} idea doesn't succeed on these very long entries, it would be worthwhile having a discussion or vote on splitting these entries. This, that and the other (talk) 00:49, 5 January 2022 (UTC)[reply]
Reading the per-language proposal again, I see that one hurdle it highlights is just how much time and effort would be needed to split all of our millions of pages, handle cases where a word has a slash in it already, etc, which brings to my mind something I think I've opined about before, which is that we don't need to split all of our millions of pages, because even if we eventually have thousands 😱 of entries that have memory errors, that's ... a ten-thousandth of a percent of our total number of entries. That tiny tail needn't wag the whole dog; we could just per-language split only those few pages which actually have memory issues. Or do a coarser split like This,that mocked up. I'd support either of those; memory errors are a severe problem in the few entries they affect, so even drastic changes to those entries should be on the table... - -sche (discuss) 01:24, 5 January 2022 (UTC)[reply]
I guess it's doable. I created {{q-lite}} as an experiment and it was actually possible to completely implement that template without using any Lua, albeit without support for arbitrarily many qualifiers. — SURJECTION / T / C / L / 12:45, 4 January 2022 (UTC)[reply]
I started testing something like this on Module:User:Surjection/invoker (Example wrapper module). It doesn't handle nested templates correctly yet, but I'm working on that. — SURJECTION / T / C / L / 17:15, 4 January 2022 (UTC)[reply]
Now it does: Special:Diff/65183310SURJECTION / T / C / L / 17:25, 4 January 2022 (UTC)[reply]
I also created Module:User:Surjection/wrapper which makes it easier to implement the double-brace templates, and hopefully it has a small enough footprint to be usable at least for some templates where hardcoding them (as {{tt}} does) would not be practical. — SURJECTION / T / C / L / 17:50, 4 January 2022 (UTC)[reply]
@Surjection Thanks. When would Module:User:Surjection/wrapper be needed? E.g. in {{col}} or high-numbered {{q}} params? In such case I could imagine writing template code to check for arguments likely requiring special handling and fall back to pure template code otherwise; e.g. {{#if:{{{4|}}}{{{5|}}}{{{6|}}}|<invoke wrapper>|<do pure template code>}}. Benwing2 (talk) 02:44, 5 January 2022 (UTC)[reply]
Yes, those are possible use cases. Having a pure template code fallback is also a good idea and would further reduce the memory footprint. I was more thinking about templates like {{compound}} that can have arbitrarily many elements, each of which can have their own glosses, sense IDs, etc. — SURJECTION / T / C / L / 12:10, 5 January 2022 (UTC)[reply]
Just had an idea to make {{multitrans}} and the like more efficient: enclose the thing in nowiki tags to prevent the inside from being interpreted as wikitext and then use mw.text.unstripNoWiki on it in the Lua module that implements the {{multitrans}}-like thing.
I've done this before on some of my list pages, but never thought about using it for {{multitrans}}.
Then we could use the original template names like {{t}} and {{t+}} instead of {{tt}} and {{tt+}} because they would be handled by the Lua module rather than the wikitext parser: {{multitrans-with-nowiki|<nowiki>* French: {{t+|fr|mot}}</nowiki>}} instead of {{multitrans|* French: {{tt+|fr|mot}} }}. This would probably speed up page parsing because the server would no longer have to reformat the templates with ⦃¦⦄. It would disable various features that assume that what's in nowiki tags isn't wikitext, like wikitext syntax highlighting, if you have that turned on in the editor. — Eru·tuon 23:20, 6 January 2022 (UTC)[reply]
@Erutuon How would we handle other templates inside of {{multitrans}}, or even in arguments to {{t}}/{{t+}}? Benwing2 (talk) 01:53, 7 January 2022 (UTC)[reply]
@Benwing2: Good question. That's the trouble with the idea. I'm thinking maybe we could expand embedded templates by recursively matching %b{} and translating common templates to module functions to reduce overhead and handling others with frame:preprocess(). It would be hard, but at least with translation sections the syntax is relatively restricted, so it might not be impossible. — Eru·tuon 20:35, 8 January 2022 (UTC)[reply]
A demonstration of the idea at Module:User:Erutuon/multitrans using Module:templateparser. Even though it still calls module functions to expand {{t}} and {{t+}}, it manages to go from 24 MB to 12 MB of Lua memory. — Eru·tuon 21:16, 8 January 2022 (UTC)[reply]
@Erutuon Very cool. Can you try this on some existing pages that use {{multitrans}} to see how much memory reduction there is in some real-world cases? It may depend a lot on whether and how much other languages are mentioned outside of the {{multitrans}} block(s). Some examples: red, wolf, grass, flower, four, etc. Benwing2 (talk) 02:26, 9 January 2022 (UTC)[reply]
@Benwing2: Unfortunately the module isn't quite ready for entries because Module:templateparser doesn't parse piped links. I got a pretty confusing error message for {{t|ja|{{...}}から[[見る|見]]た|...}} about an invalid gender because the pipe in the link was parsed as a parameter separator. There might be no piped links in some of those translation sections, but someone could add one at any time. So User:Surjection or I will have to implement piped link parsing in Module:templateparser before we can test Module:User:Erutuon/multitrans further. — Eru·tuon 18:55, 9 January 2022 (UTC)[reply]
Fixed: Special:Diff/65254978SURJECTION / T / C / L / 19:05, 9 January 2022 (UTC)[reply]
@Surjection: Thanks! @Benwing: I tried replacing {{multitrans}} with Module:User:Erutuon/multitrans in red (~30 MB -> ~26 MB in the English section) and wolf (~28 MB -> ~25 MB in the English section), grass (~30 MB -> ~27 MB over the whole page), flower (~29 MB -> ~25 MB in the English section), four (~29 MB -> ~28 MB). Pretty good results! I wasn't expecting it to be slightly better than the current {{multitrans}}. — Eru·tuon 20:33, 9 January 2022 (UTC)[reply]
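[The piped-link pitfall discussed above comes down to splitting a template body on pipes while ignoring pipes nested inside [[...]] links or embedded {{...}} templates. A minimal re-illustration of that logic (not the actual Module:templateparser code, which handles much more):]

```python
def split_template_params(body):
    """Split a template body on top-level '|' only, so pipes inside
    [[...|...]] links or nested {{...}} templates are left alone."""
    parts, cur = [], []
    depth_sq = depth_br = 0  # [[...]] and {{...}} nesting depths
    i = 0
    while i < len(body):
        two = body[i:i + 2]
        if two == "[[":
            depth_sq += 1; cur.append(two); i += 2; continue
        if two == "]]" and depth_sq:
            depth_sq -= 1; cur.append(two); i += 2; continue
        if two == "{{":
            depth_br += 1; cur.append(two); i += 2; continue
        if two == "}}" and depth_br:
            depth_br -= 1; cur.append(two); i += 2; continue
        ch = body[i]
        if ch == "|" and not depth_sq and not depth_br:
            parts.append("".join(cur)); cur = []
        else:
            cur.append(ch)
        i += 1
    parts.append("".join(cur))
    return parts
```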

Nupe templates

I've just started adding Nupe entries, but I wasn't sure how to do the headword-line templates (verb and noun) as there're a couple of considerations to make:

  • Having an optional plural parameter for nouns.
  • Having the tone marks of every lemma in the headword line and not in the page title, so that tone-marked links lead to the page without tone marks. (The way it works for Yorùbá and Hausa)

@Metaknowledge - Would you be able to help? —⁠This unsigned comment was added by Oníhùmọ̀ (talkcontribs) at 17:29, 5 January 2022‎.

@Oníhùmọ̀: I'm excited to see some work on Nupe. I'll add plurals to {{nup-noun}} for you, but it looks like you already got {{nup-verb}}, and {{nup-pos}} to work on your own. What source are you using? I have Blench's dictionary, but that's it. By the way, you need to leave your signature in the same edit as a ping for it to actually work. —Μετάknowledgediscuss/deeds 07:06, 6 January 2022 (UTC)[reply]
Mi jin yèbo sánrányí (Thanks), I'm using Blench's dictionary too (as well as the one on plants), but unfortunately it doesn't conform to the proposed orthography conventions by I.S.G. Madugu. I also use a blog called edukonupe and I use audio as there're a few Nupe channels on YouTube and I live with a native speaker, so I'm able to confirm the tones. Then for grammar I mainly use Kandybowicz's work. Oníhùmọ̀ (talk) 09:38, 6 January 2022 (UTC)[reply]
@Oníhùmọ̀: Could you please add WT:About Nupe and detail the orthographic conventions that we should use? You can use WT:About Yoruba as a model, and let me know if you need anything. —Μετάknowledgediscuss/deeds 04:08, 7 January 2022 (UTC)[reply]
I've made WT:About Nupe now. There's a minor issue with linking, links with tone-marked syllabic nasals aren't leading to the right page, ǹná (mother) for example is leading to ǹna and not nna. Oníhùmọ̀ (talk) 00:30, 8 January 2022 (UTC)[reply]
@Oníhùmọ̀, Metaknowledge I tried to fix the handling of Nupe diacritics. It now unilaterally removes all acutes, graves, circumflexes, carons and macrons. Let me know if this is incorrect and I can go back to the old way of specifying per-character. Benwing2 (talk) 19:50, 8 January 2022 (UTC)[reply]
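[The fix described here amounts to canonically decomposing the headword and dropping the listed tone marks. A minimal sketch of that approach, assuming the Nupe strip set is exactly the acute, grave, circumflex, caron and macron mentioned above:]

```python
import unicodedata

# Combining marks stripped from Nupe page names, per the fix above.
STRIP_NUPE = {
    "\u0301",  # combining acute
    "\u0300",  # combining grave
    "\u0302",  # combining circumflex
    "\u030C",  # combining caron
    "\u0304",  # combining macron
}

def entry_name(display_form, strip=STRIP_NUPE):
    """Derive the page name from a tone-marked headword: NFD-decompose,
    drop the language's tone diacritics, then recompose."""
    decomposed = unicodedata.normalize("NFD", display_form)
    kept = "".join(c for c in decomposed if c not in strip)
    return unicodedata.normalize("NFC", kept)
```

This handles tone-marked syllabic nasals too, since ǹ decomposes to n plus a combining grave, so ǹná maps to the page nna.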
@Benwing2 @Metaknowledge, While we're at it, could we add these changes for Edo (bin), Igala (igl), & Nupe?
  • Edo (Taken from the Ẹdo standard orthography and other sources):
  • Sort order: a, b, d, e, ẹ, f, g, gb, gh, h, i, k, kh, kp, l, m, mw, n, nw, ny, o, ọ, p, r, rh, rr, s, t, u, v, vb, w, y, z.
  • Diacritics: ◌́ (acute accent used for high tone), ◌̀ (grave accent used for low tone), ◌̄ (macron accent used for downstepped high tone), ◌̏ (double grave accent used for downstepped low tone)
  • Igala:
  • Sort order: a, b, ch, d, e, ẹ, f, g, gb, gw, h, i, j, k, kp, kw, l, m, n, ny, ñ, ñm, ñw, o, ọ, p, r, t, u, w, y
  • Diacritics: ◌́ (acute accent used for high tone), ◌̀ (grave accent used for low tone), ◌̄ (macron accent sometimes used for mid-tone or mid-high tone), ◌̇ (dot above used on ṅ to show an extra-high tone), ◌̍ (vertical line above used on n̍ as an alternative way of spelling ṅ as suggested by the standard orthography)
  • Nupe:
  • Sort order: a, b, c, d, dz, e, f, g, gb, h, i, j, k, l, m, n, o, p, r, s, sh, t, ts, u, v, w, y, z, zh.
For other Nigerian languages, I'll be back later with a more comprehensive list once I get through my backlog. Thank you! AG202 (talk) 03:30, 15 January 2022 (UTC)[reply]
@AG202 By sort order do you actually mean that e.g. for Edo, the order should be ga ... gz, gb, gh? Similarly that ẹa sorts after ez? For diacritics you mean that these should be stripped when creating page names? Benwing2 (talk) 03:53, 15 January 2022 (UTC)[reply]
@Benwing2 Re: sort order, yes, similar to how Yorùbá currently has ẹ̀bà is after ewé and agbábọ́ọ̀lù after Àgùàlà. Re: diacritics, yes as well, those are the diacritics that should be stripped from page names. Apologies for the confusion there. AG202 (talk) 04:18, 15 January 2022 (UTC)[reply]
@AG202 I made changes for those three languages above and pinged you on the changes. When you have a chance, please verify that they work correctly. Thanks! Benwing2 (talk) 04:55, 15 January 2022 (UTC)[reply]
@Benwing2 It all looks good! Thank you so so much once again! AG202 (talk) 05:17, 15 January 2022 (UTC)[reply]
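The diacritic stripping agreed on above can be sketched in JavaScript (a hypothetical illustration only; the real logic lives in the Lua language-data modules, and the exact set of marks to strip varies by language). The key point is that under-dot letters like ẹ and ọ must survive while tone marks are removed:

```javascript
// Combining marks to strip: acute, grave, circumflex, macron, caron,
// double grave, dot above, vertical line above (the tone marks listed above).
// The under-dot (U+0323) in ẹ/ọ is deliberately NOT in this set.
const TONE_MARKS = /[\u0300\u0301\u0302\u0304\u030C\u030F\u0307\u030D]/g;

function stripTones(word) {
  // NFD splits precomposed characters into base letter + combining marks,
  // so the tone marks can be deleted without touching the base letters.
  return word.normalize("NFD").replace(TONE_MARKS, "").normalize("NFC");
}
```

For example, `stripTones("ǹná")` yields `nna`, matching the page-name behaviour requested for Nupe.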

Making Template:quote-book and Template:quote-hansard compatible with older LCCNs[edit]

Both Template:quote-book and Template:quote-hansard have parameters set out to accept LCCNs which are used to create permalinks of the style https://lccn.loc.gov/##########, but what an LCCN is has changed over time. Currently it is defined to mean "Library of Congress Control Number", but it has previously been used to mean "Library of Congress Catalogue Card Number". Under the catalogue card scheme there was a wider variety of styles for LCCNs, with assigned numbers including things like 99-1 and gm 71-2450. In general, LCCNs in the catalogue card scheme included hyphens, sometimes letter prefixes, and were of variable length, though they always had fewer than eight digits. When the switch to newer control numbers happened, many were assigned new, standardized LCCNs in the Library of Congress' database. The newly assigned control numbers were formed by replacing the hyphen with the number of zeros necessary to bring the total number of digits to eight. Working with aforementioned examples, 99-1 was standardized as 99000001 and gm 71-2450 as gm 71002450 (note that the space is removed when creating the permalink). Would it be possible to modify the templates so that when provided with older catalogue card numbers the templates use the standardization process to generate valid permalinks? Thanks to Library of Congress reference librarian Elizabeth L. Brown and this page on numbers found in LC catalog records for helping me understand the standardization scheme. Thanks for any help and take care. —The Editor's Apprentice (talk) 23:10, 6 January 2022 (UTC)[reply]

@The Editor's Apprentice This can be done but I need an exact description of how to standardize older LCCN's. Benwing2 (talk) 01:52, 7 January 2022 (UTC)[reply]
@Benwing2: I am unsure how much more exact I can get, but I'll try. I'll focus on how the inputted older LCCNs should be transformed so that a valid permalink can be made. To begin, an older LCCN can have up to three parts, one optional and two mandatory. By optional I mean "does not exist in all older LCCNs" and by mandatory I mean "exists in all older LCCNs". I'm not sure if there is better terminology for those ideas. The first part is an optional prefix of one or two letters. The second part is required, is the first of two sets of digits, and is always two digits long. The third part is required, is the second of the two sets of digits, and is one to six digits long. If there is a letter prefix, it is separated from the rest of the LCCN by a space. The first set of digits is separated from the second set by a hyphen. Using the example of "99-1", this old LCCN has no prefix, its first set of digits is "99" and its second "set" is "1". For "gm 71-2450", the prefix is "gm", the first set of digits "71", and the second set of digits is "2450". To create a valid permalink, start by removing any spaces that might be in the supplied LCCN. Next check if there is a hyphen; if there is, then the LCCN is older. If there is no hyphen, the LCCN is newer: simply prepend https://lccn.loc.gov/ and you're done. Given that the LCCN is older, count the number of digits in the given LCCN. Next, replace the hyphen in the LCCN with a number of zeros equal to 8 minus the number of digits already in the LCCN. Next prepend https://lccn.loc.gov/. The result should be a valid permalink. Using "99-1" as an example, no spaces are removed, a hyphen is found confirming it's old, and three digits already exist. The hyphen is then replaced with five zeroes resulting in "99000001", and then the URL part is prepended resulting in the valid permalink https://lccn.loc.gov/99000001 . 
Using "gm 71-2450" as an example, a space is removed resulting in "gm71-2450", a hyphen is found confirming it's old, and six digits already exist. The hyphen is then replaced with two zeroes resulting in "gm71002450", and then the URL part is prepended resulting in the valid permalink https://lccn.loc.gov/gm71002450 . Hope this helps. —The Editor's Apprentice (talk) 03:07, 7 January 2022 (UTC)[reply]
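The procedure just described can be sketched in JavaScript (a hypothetical helper for illustration only; the templates' actual implementation is in Lua):

```javascript
// Normalize an LCCN (old catalogue-card or new control-number style)
// into an lccn.loc.gov permalink, per the steps described above.
function lccnPermalink(lccn) {
  let s = lccn.replace(/ /g, "");        // spaces are dropped in permalinks
  if (!s.includes("-")) {
    return "https://lccn.loc.gov/" + s;  // newer control number: use as-is
  }
  // Older catalogue-card number: pad the digit count to eight by
  // replacing the hyphen with the right number of zeros.
  const digits = (s.match(/[0-9]/g) || []).length;
  const zeros = "0".repeat(Math.max(0, 8 - digits));
  return "https://lccn.loc.gov/" + s.replace("-", zeros);
}
```

With this, `lccnPermalink("99-1")` gives `https://lccn.loc.gov/99000001` and `lccnPermalink("gm 71-2450")` gives `https://lccn.loc.gov/gm71002450`, matching the worked examples above.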
@The Editor's Apprentice Try it now. Benwing2 (talk) 04:58, 7 January 2022 (UTC)[reply]
It has created the correct permalinks in all of my tests. The one change that I would make is having the display for the link still be the LCCN in the old format as it was inputted. Doing so just generally feels more honest to me, might give the reader a bit more information, and might be useful in case the Library of Congress ever changes the format again. Thanks for the good work. —The Editor's Apprentice (talk) 05:30, 7 January 2022 (UTC)[reply]
@The Editor's Apprentice Done. Benwing2 (talk) 06:59, 7 January 2022 (UTC)[reply]

Wiktionary:Request_pages[edit]

This page (only) is generating a RDBMS error including when accessing any diffs in its history. Maybe temporary? -- GreenC (talk) 04:08, 7 January 2022 (UTC)[reply]

"A database query error: [9a1fa30c-f477-4f10-a7b2-a7a961442a01] 2022-01-07 04:09:16: Fatal exception of type "Wikimedia\Rdbms\DBQueryError"
See [2]. Now causing database errors due to the use of DynamicPageList on the documentation page ([3]). DTLHS (talk) 04:15, 7 January 2022 (UTC)[reply]

Google Books quotation template generator[edit]

Hello,

The following script, when added to Greasemonkey on Firefox, will add a button on Google Books Search results page for easy quotation: [4]. You should still check that the info provided is valid; notably, it won't pick up chapter-specific titles and authors in books where that's relevant, and I've sometimes noticed Google Books provides incorrect page numbers (but usually they are right).

I haven't tested it with Tampermonkey on Chrome, but replacing GM.xmlHttpRequest with GM_xmlhttpRequest may help.

If any of you use Firefox with Greasemonkey and want to test it, feel free to report back with your opinions. 70.172.194.25 07:45, 7 January 2022 (UTC)[reply]

Screenshot to entice you: [5]. 70.172.194.25 07:49, 7 January 2022 (UTC)[reply]
Can confirm it works with Tampermonkey on Chrome after making the change I mentioned. 70.172.194.25 15:04, 7 January 2022 (UTC)[reply]
Thanks, 70.172.194.25. I tried it on Chrome, and it works quite well. It will certainly be quite useful. —Svārtava [tcur] 04:22, 8 January 2022 (UTC)[reply]
Works great thus far, thank you so much for this. This has always been my least favorite part about Wiktionary but this will now change. — Fytcha T | L | C 〉 04:29, 8 January 2022 (UTC)[reply]
I'm glad it was useful. Let me know if you have any suggestions and I can try to implement them. :)
Might spend a day this week making one for Google Scholar since that's also in frequent usage.
Minor thing, I noticed that it might be a good idea to have this @include line in addition to the one there:
// @include https://www.google.com/search?tbm=bks&*
That's because some of the templates around here, at least {{googles}}, use links that start with ?tbm=bks& instead of having &tbm=bks& in the middle like normal (although you can also just press the search button again in such cases). 70.172.194.25 04:34, 8 January 2022 (UTC)[reply]
It doesn't work here (no buttons): [6] I was unfortunately unable to debug the problem. — Fytcha T | L | C 〉 04:37, 8 January 2022 (UTC)[reply]
That is exactly the problem I mentioned in my last post. Adding the extra @include line should help. :) 70.172.194.25 04:38, 8 January 2022 (UTC)[reply]
That fixed it. Thanks a lot again! — Fytcha T | L | C 〉 04:41, 8 January 2022 (UTC)[reply]
Would it be possible to also add such a button to the in-book view? Even if querying and adding the passage automatically is not possible, the other parameters would still be worth it. Be aware that there are two different kinds of in-book views: Full preview and minimal preview. — Fytcha T | L | C 〉 16:53, 9 January 2022 (UTC)[reply]
Minor bug report: The button does nothing here for the result "Top banana - 3 Dec 1951 - Page 75". — Fytcha T | L | C 〉 16:57, 9 January 2022 (UTC)[reply]
The bug report is easily dealt with. New code, now with ISSN detection (you have to do the minor change for Chrome again): [7]. I'll respond to the feature request later. 70.172.194.25 18:19, 9 January 2022 (UTC)[reply]
Major update

I have added more features, including an ambitious interpretation of the one Fytcha requested above: it should work in search result mode, page preview/reader mode, and book information screen mode, on both the old and new versions of Google Books. Additional features include OCLC/LCCN detection (inconsistent, depending on Google's database), volume and issue numbers, and series titles. The code is here: [8]. This same code should work on both Firefox and Chrome. Because the code is more complex now, there may be bugs/undesirable behavior. Feel free to report anything. 70.172.194.25 04:54, 10 January 2022 (UTC)[reply]

Minor bugfix to the above. Sorry, I had to fix it since I noticed it so quickly (page numbers were not working for in-book view on the new version of Google Books). [9]. 70.172.194.25 06:01, 10 January 2022 (UTC)[reply]
Works great! Thank you again so much for this! — Fytcha T | L | C 〉 01:48, 11 January 2022 (UTC)[reply]

Google Scholar version[edit]

For Greasemonkey on Firefox: [10]. For Tampermonkey on Chrome: [11].

You have to click the normal "Cite" button, which now will pop up with the Wiktionary quotation format on top. Similar caveats apply as above. I also added a button to run a (couple second) longer check to see if a journal article is on CrossRef, which sometimes (but not always) gives the DOI and language, although you can also obtain that information elsewhere or omit it if deemed not worth it.

Let me know what you think. 70.172.194.25 10:47, 9 January 2022 (UTC)[reply]

Wow, nice, it works great. Thanks a million! Also, I think you should definitely create an account.Svārtava [tcur] 13:04, 9 January 2022 (UTC)[reply]

putting back no entries[edit]

It wouldn't let me revert calculusless to "no entry" because it stripped l3s. General Vicinity (talk) 00:44, 10 January 2022 (UTC)[reply]

Save the Date: Coolest Tool Award 2021: this Friday, 17:00 UTC[edit]


Hello all,

The ceremony of the 2021 Wikimedia Coolest Tool Award will take place virtually on Friday 14 January 2022, 17:00 UTC.

This award is highlighting software tools that have been nominated by contributors to the Wikimedia projects. The ceremony will be a nice moment to show appreciation to our tool developers and maybe discover new tools!

Read more about the livestream and the discussion channels.

Thanks for joining! andre (talk) 08:02, 6 January 2022 (UTC)[reply]

Feasibility of a bot to line up columns in translations with the trans-mid template?[edit]

I just edited welcome because the list of translations for the interjections included far more lines in one column than another. It seems trivial to me as someone who couldn't make a bot to save his life to make a bot that would insert {{trans-mid}} in the middle of the list of translations, plus or minus a few entries. Is this something that someone can do and feels like is worth doing? Thanks. —Justin (koavf)TCM 09:42, 12 January 2022 (UTC)[reply]
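The midpoint insertion Koavf describes can be sketched as follows (a hypothetical helper, not any bot's actual code; it counts only top-level `*` lines as rows, so nested `*:` continuation lines stay with their parent translation):

```javascript
// Insert {{trans-mid}} roughly halfway through a translation table's rows.
// `lines` is the wikitext between {{trans-top}} and {{trans-bottom}}.
function insertTransMid(lines) {
  // Indices of top-level translation rows ("* Language: ..."), skipping
  // nested "*:" lines, which belong to the preceding row.
  const rowStarts = lines
    .map((line, i) => (/^\*[^:]/.test(line) ? i : -1))
    .filter((i) => i >= 0);
  if (rowStarts.length < 2) return lines.slice(); // nothing to balance
  // Split position: the row starting the second half of the list.
  const mid = rowStarts[Math.ceil(rowStarts.length / 2)];
  return [...lines.slice(0, mid), "{{trans-mid}}", ...lines.slice(mid)];
}
```

As the replies below this request note, character counts are a crude proxy for rendered column height, which is part of why this approach is unsatisfying in practice.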

@Koavf I'm pretty sure there used to be a bot that did exactly that, maybe it was run by User:Ruakh? Benwing2 (talk) 03:36, 13 January 2022 (UTC)[reply]
@Koavf I found it. See e.g. [12] on fraught, by User:NadandoBot, run by User:DTLHS. Benwing2 (talk) 07:00, 13 January 2022 (UTC)[reply]
Thanks! @DTLHS:, looks like it doesn't run anymore. Can it do this task? —Justin (koavf)TCM 07:05, 13 January 2022 (UTC)[reply]
I'm not interested in running it at this time. DTLHS (talk) 16:24, 13 January 2022 (UTC)[reply]
Me, either. It's just too hard to decide where the {{trans-mid}} goes, partly because of the difficulty of translating the wikitext into dimensions (different characters take up different amounts of space, to say nothing of templates and so on), and partly because line-wrapping depends on screen size. I prefer bot tasks where I can feel reasonably confident that they're making an improvement. Hopefully CSS support for balancing columns will become widely-enough available someday that we can get rid of {{trans-mid}} and have the browser handle this for us. (Though even in the meantime, maybe we can use some JS to achieve it? Not sure.) —RuakhTALK 02:32, 15 January 2022 (UTC)[reply]
@Ruakh what is the problem with current browser functionality re column balancing? According to caniuse.com, 97.9% of desktop users can reap the benefits of the column-fill CSS property, but in my tests, this isn't even needed - just applying column-count: 2 does the trick. What am I missing? This, that and the other (talk) 06:03, 15 January 2022 (UTC)[reply]
You may not be missing anything. I had Googled `css balanced columns` and found https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Columns/Spanning_Columns, which gave me the impression that it wasn't well-supported. But I suppose that page might be old, or too conservative, or something. —RuakhTALK 06:56, 15 January 2022 (UTC)[reply]
I made a mockup with translation tables divided into columns using pure CSS at User:This, that and the other/subject - what do you all think? This uses pure CSS to achieve the column effect, with the added bonus that it adapts the number of columns to what can comfortably be displayed on the user's screen. Most users will continue to see 2, while mobile users will see 1 and users on very large screens will see 3 columns. (The translation <table> element now only has one cell and is not needed, but I preserved it in the mockup so the translation-adder gadget didn't break.) This, that and the other (talk) 08:08, 15 January 2022 (UTC)[reply]
Hmm, I'm seeing only one column in Firefox 95 (even if I zoom out so much that there's room for six to eight columns). It works well in Chrome, though. If we can get it working as well for the other major browsers, it will definitely be an improvement over the status quo. :-) —RuakhTALK 09:07, 15 January 2022 (UTC)[reply]
I might have fixed it, but I would need an administrator to change the content model of "User:This,_that_and_the_other/subject/styles.css" to "Sanitized CSS" using Special:ChangeContentModel.... This, that and the other (talk) 09:29, 15 January 2022 (UTC)[reply]
Sorry, no dice: the system won't let me change the content model of a page in your user-space. Furthermore, when I tried copying the CSS to a page in my user-space and changing its content model to 'Sanitized CSS', it wouldn't let me do so because the hyphen-prefixed CSS properties aren't recognized.
So, I think there are two options here:
  1. If you're pretty confident that this is the desired CSS (like, there won't be several more rounds of testing and updates), then I can add it to MediaWiki:Common.css, just with the selector changed to .this-that-and-the-other .translations ul so that you can use it from your page without affecting existing pages until we're ready.
  2. For purposes of your own testing, you can use the HTML syntax for an unordered list instead of the wikitext syntax, and then put the CSS inside <ul style="...">. (I've just tested, and these properties are let through in that context. Dunno why 'Sanitized CSS' is so strict if the same properties are allowed in inline CSS, but whatever.)
RuakhTALK 11:13, 16 January 2022 (UTC)[reply]
Ah, you probably need to be an interface-admin to change the content model of someone else's user pages. I didn't think of that.
The problem with the raw <ul> tag is that MediaWiki automatically creates a new <ul> element to wrap the * list items. Using <li> tags instead of * would probably prevent this, but I'd rather not muck around with all the nested lists to change them to use <li> tags - that also runs the risk of the mockup diverging from real-world practice. The CSS is pretty simple and I don't anticipate any issues, so I'd be grateful if you could put the code in common.css so the mockup page can be tested by more users. Thanks @Ruakh! This, that and the other (talk) 11:38, 16 January 2022 (UTC)[reply]
Done; see MediaWiki:Common.css?diff=65358919 for details. —RuakhTALK 19:56, 16 January 2022 (UTC)[reply]

Plopping this down here because the thread is getting long. It looks like column-width has been supported for almost 6 years by all the major browsers (the latecomer being Firefox and Firefox for Android on 2016-11-15), and the vendor-prefixed versions have been supported for even longer. That probably falls within our compatibility obligations (mw:Compatibility#Browsers), and we've already been using the similar column-count properties in list templates like {{col3}} anyway. (For some reason the un-vendor-prefixed column-count was supported later — 2017-03-07 in Firefox — but again there were vendor-prefixed versions available much earlier.)

column-width might be better than column-count because it's less likely to squeeze the words in the columns if the viewport is narrow. But we should check how column-width treats translation terms that are too long; maybe we want to somehow select a column width that's wider than the longest translation? Might require translation box classes with several different widths where you can choose one and put it in {{trans-top}} or {{trans-top-see}} or {{checktrans-top}}. — Eru·tuon 21:59, 16 January 2022 (UTC)[reply]

I think column-width makes more sense than column-count for this, and indeed for all our columnar templates. I've never understood why we have {{col2}}, {{col3}} and {{col4}} as separate templates used without rhyme or reason - it would make far more sense (to me) to have a single {{col-box}} that sets a column width and, in doing so, adapts to the space available on the user's screen.
For the translations table at User:This, that and the other/subject I picked a column width of 35em, which seems adequate for typical translation tables. If translation lines are long, they will wrap - that is the current behaviour, and there is no risk of ambiguity thanks to the bullet at the start of every translation line. Moreover, bear in mind that 35em is a minimum. Hardly any users will ever see a column that is exactly 35em wide.
The only place I can think of that might require a wider column width is phrasebook entries, perhaps via a {{trans-top-phrasebook}} template. This, that and the other (talk) 00:27, 17 January 2022 (UTC)[reply]
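The approach discussed in this thread amounts to something like the following CSS (a sketch only; the mockup's actual selectors and class names may differ):

```css
/* Pure-CSS translation columns: column-width sets a minimum comfortable
   width, so narrow screens get 1 column, typical screens 2, and very wide
   screens 3, with no {{trans-mid}} needed. */
.translations ul {
  margin: 0;
  column-width: 35em;   /* minimum width per column, as chosen above */
  column-fill: balance; /* keep columns roughly equal in height */
}
```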

falloir[edit]

fallant is given as present participle, but does not exist —⁠This unsigned comment was added by JohnWheater (talkcontribs).

@JohnWheater fallant is rare and perhaps obsolete (indeed, Molière used it), but it does seem to exist. To dispute the existence of a word at Wiktionary, you may follow the process outlined at Wiktionary:Requests for verification/Non-English. This, that and the other (talk) 08:51, 13 January 2022 (UTC)[reply]

Category or some such for Language translations[edit]

It'd be nice to have a page where I could check English words with a given language's translations - people add some weird translations that are incorrect, and it's hard to check all of them - if they were centralized, it would be easier. It could be like xlanguage with translations. Vininn126 (talk) 18:06, 13 January 2022 (UTC)[reply]

Addendum: perhaps this could be a full-blown search engine, but that might require much more work. Users could look up content in glosses, or by gender, aspect, etc. (really anything in the translation box), or for transliterations etc., if that sort of thing is possible. Vininn126 (talk) 18:23, 13 January 2022 (UTC)[reply]
For the time being, there's this: [13]Fytcha T | L | C 〉 18:25, 13 January 2022 (UTC)[reply]
@Fytcha Translation subpages (created when the sheer number of translations causes "out of memory" errors) aren't in Category:English lemmas. Aside from that, it's very useful, and I now have the version without "incategory" bookmarked. So far I've found someone who was using {{t}} in etymology sections, and some other problems I wouldn't have found otherwise. Chuck Entz (talk) 03:58, 15 January 2022 (UTC)[reply]
Thanks! Vininn126 (talk) 20:25, 14 January 2022 (UTC)[reply]
Several people have requested a translation search engine and it seems kind of fun, so I'm working on a translation database to start with. At this point it can extract the glosses from {{trans-top}} and the translation information from {{t}} and {{t+}} and the rest. Hopefully someday I'll actually make a Toolforge site that uses it. — Eru·tuon 03:35, 15 January 2022 (UTC)[reply]

Edit filter on Citations:misguesstimate[edit]

I tried to add a quotation but it would not let me save. 70.172.194.25 05:17, 14 January 2022 (UTC)[reply]

For this reason (among a ton of other reasons) you should consider creating an account. —Svārtava [tcur] 05:33, 14 January 2022 (UTC)[reply]
Sorry for the inconvenience. The abuse filter entry is Special:AbuseLog/1254533; an unfortunate false positive. Autoconfirmed users are exempt from this filter. — Fytcha T | L | C 〉 11:47, 14 January 2022 (UTC)[reply]
Could you add it to the page? 70.172.194.25 16:32, 14 January 2022 (UTC)[reply]
Why don't you create an account? It's very easy and has a lot of advantages for regular contributors. —Svārtava [tcur] 16:38, 14 January 2022 (UTC)[reply]
Done. — Fytcha T | L | C 〉 17:37, 14 January 2022 (UTC)[reply]

Escaping spaces in url= in {{quote-journal}}[edit]

I tried both with a space and with %20. Anyone know the right way to do this? [14] 70.172.194.25 20:49, 15 January 2022 (UTC)[reply]

Never mind, there was another space I forgot to escape. I'm not sure why the template can't just do this automatically, though. 70.172.194.25 20:51, 15 January 2022 (UTC)[reply]
What is the issue exactly? Are you wanting it to automatically URL-encode the param in |url=? I think that will break existing URL's in this param, although it could potentially be hacked to do this only for spaces. OTOH it could be argued that the value of |url= should be a valid URL already. Benwing2 (talk) 04:30, 17 January 2022 (UTC)[reply]
It should at least emit an error if there is a non-trailing/leading space, instead of silently failing. The current behavior is to use the first part before the space as the URL and the rest as the link text, as in [https://example.org/Some/Unescaped Path/Here] => Path/Here. 70.172.194.25 05:02, 17 January 2022 (UTC)[reply]
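The check being requested can be sketched like this (a hypothetical JavaScript helper for illustration; the template's actual validation is done in Lua):

```javascript
// Validate a |url= value before building a [url text] wikitext link:
// an internal space would silently split the value into URL + link text,
// so flag it as an error instead. Leading/trailing spaces are harmless.
function checkUrlParam(url) {
  const trimmed = url.trim();
  if (/\s/.test(trimmed)) {
    throw new Error("url= contains an unescaped space: " + url);
  }
  return trimmed;
}
```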
I implemented this; hopefully it won't break anything. Benwing2 (talk) 06:18, 17 January 2022 (UTC)[reply]
Cool, much appreciated! Perhaps you could add a tracking category to find such errors (and see if it broke anything). 70.172.194.25 06:19, 17 January 2022 (UTC)[reply]
No need for a tracking category, they show up in CAT:E. I already fixed a few of them. Benwing2 (talk) 06:33, 17 January 2022 (UTC)[reply]

Flamenco tag[edit]

Hi. I tagged falseta as being a flamenco term, yet it doesn't show up in Category:es:Flamenco. Can someone please add flamenco as a valid tag? Br00pVain (talk) 14:20, 17 January 2022 (UTC)[reply]

@Br00pVain Try now. Benwing2 (talk) 18:54, 17 January 2022 (UTC)[reply]

Improving Module:de-headword[edit]

I've made this change which can already be seen to be working in Urlaubssemester. Unfortunately, I didn't quite figure out how to make use of the preexisting infrastructure in Module:de-noun#L-45; could someone help me do that? Also, it would be really nice if we could use the same parameters for {{de-noun}} and {{de-decl-noun-f}} (etc.) such that editors can just copy down the headword template, switch the template name and everything works. I believe some bot intervention will be necessary for that as the modules have different defaults, see the plural parameters in Urlaubssemester. — Fytcha T | L | C 〉 21:07, 17 January 2022 (UTC)[reply]

pl-pronunciation[edit]

Recently we've been updating our templates from {{pl-IPA}} to {{pl-pronunciation}} manually. Would it be possible to get a bot to do this? If a page has pl-IPA at all, it's safe to replace with pl-p. Second of all, we've been having a problem (see Module talk:pl-IPA) with nki and ngi in final position, and my Lua skills are too poor to understand what the problem is. Third, I was pestering @Derbeth on his talk page about whether we could get a bot to automatically add audios from the Commons to pages without audio. I dunno if anyone else would be able to help. If not, I can be patient there. Finally, would it be possible for me to get a list of Polish pages WITHOUT IPA, so that I could clean that up? Vininn126 (talk) 16:39, 18 January 2022 (UTC)[reply]

The list of Polish lemmas without IPA information can be found here: [15] If you want to get those that have IPA but not one of the automated templates, simply remove the minus sign in front of the last "insource".
I can do the bot thing if you give me a couple of weeks, I wanted to submit a bot account for some German cleanup soon anyway. — Fytcha T | L | C 〉 17:00, 18 January 2022 (UTC)[reply]
Awesome, thanks a ton. Vininn126 (talk) 17:31, 18 January 2022 (UTC)[reply]